00:00:00.001 Started by upstream project "autotest-per-patch" build number 132540 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.104 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.104 The recommended git tool is: git 00:00:00.104 using credential 00000000-0000-0000-0000-000000000002 00:00:00.112 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.157 Fetching changes from the remote Git repository 00:00:00.158 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.209 Using shallow fetch with depth 1 00:00:00.209 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.209 > git --version # timeout=10 00:00:00.246 > git --version # 'git version 2.39.2' 00:00:00.246 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.271 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.271 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.364 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.377 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.389 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.389 > git config core.sparsecheckout # timeout=10 00:00:06.402 > git read-tree -mu HEAD # timeout=10 00:00:06.421 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.445 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.445 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.535 [Pipeline] Start of Pipeline 00:00:06.549 [Pipeline] library 00:00:06.550 Loading library shm_lib@master 00:00:06.550 Library shm_lib@master is cached. Copying from home. 00:00:06.566 [Pipeline] node 00:00:06.574 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.576 [Pipeline] { 00:00:06.585 [Pipeline] catchError 00:00:06.587 [Pipeline] { 00:00:06.596 [Pipeline] wrap 00:00:06.603 [Pipeline] { 00:00:06.611 [Pipeline] stage 00:00:06.613 [Pipeline] { (Prologue) 00:00:06.825 [Pipeline] sh 00:00:07.111 + logger -p user.info -t JENKINS-CI 00:00:07.138 [Pipeline] echo 00:00:07.140 Node: CYP9 00:00:07.150 [Pipeline] sh 00:00:07.458 [Pipeline] setCustomBuildProperty 00:00:07.471 [Pipeline] echo 00:00:07.473 Cleanup processes 00:00:07.480 [Pipeline] sh 00:00:07.771 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.771 3294314 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.784 [Pipeline] sh 00:00:08.089 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.089 ++ grep -v 'sudo pgrep' 00:00:08.089 ++ awk '{print $1}' 00:00:08.089 + sudo kill -9 00:00:08.089 + true 00:00:08.115 [Pipeline] cleanWs 00:00:08.125 [WS-CLEANUP] Deleting project workspace... 00:00:08.125 [WS-CLEANUP] Deferred wipeout is used... 
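For reference, the process-cleanup step replayed above reduces to a small shell idiom: list anything still running out of the previous build's SPDK tree, then force-kill it and tolerate an empty result. A minimal sketch, assuming standard pgrep/grep/awk and the workspace path shown in the log:

    # Kill leftovers from a previous run of this job's spdk tree.
    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # kill -9 with no PIDs exits non-zero, hence the guard; the log shows the
    # same pattern as `+ sudo kill -9` followed by `+ true`.
    sudo kill -9 $pids || true   # $pids left unquoted so multiple PIDs split into args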
00:00:08.133 [WS-CLEANUP] done 00:00:08.148 [Pipeline] setCustomBuildProperty 00:00:08.163 [Pipeline] sh 00:00:08.445 + sudo git config --global --replace-all safe.directory '*' 00:00:08.525 [Pipeline] httpRequest 00:00:08.902 [Pipeline] echo 00:00:08.903 Sorcerer 10.211.164.20 is alive 00:00:08.911 [Pipeline] retry 00:00:08.912 [Pipeline] { 00:00:08.921 [Pipeline] httpRequest 00:00:08.924 HttpMethod: GET 00:00:08.924 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.925 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.944 Response Code: HTTP/1.1 200 OK 00:00:08.944 Success: Status code 200 is in the accepted range: 200,404 00:00:08.945 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.764 [Pipeline] } 00:00:13.783 [Pipeline] // retry 00:00:13.791 [Pipeline] sh 00:00:14.082 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.101 [Pipeline] httpRequest 00:00:14.502 [Pipeline] echo 00:00:14.504 Sorcerer 10.211.164.20 is alive 00:00:14.513 [Pipeline] retry 00:00:14.515 [Pipeline] { 00:00:14.529 [Pipeline] httpRequest 00:00:14.534 HttpMethod: GET 00:00:14.534 URL: http://10.211.164.20/packages/spdk_0617ba6b21606d11ec01e9cf835cc8f635270e28.tar.gz 00:00:14.535 Sending request to url: http://10.211.164.20/packages/spdk_0617ba6b21606d11ec01e9cf835cc8f635270e28.tar.gz 00:00:14.555 Response Code: HTTP/1.1 200 OK 00:00:14.556 Success: Status code 200 is in the accepted range: 200,404 00:00:14.556 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_0617ba6b21606d11ec01e9cf835cc8f635270e28.tar.gz 00:01:07.451 [Pipeline] } 00:01:07.470 [Pipeline] // retry 00:01:07.478 [Pipeline] sh 00:01:07.768 + tar --no-same-owner -xf spdk_0617ba6b21606d11ec01e9cf835cc8f635270e28.tar.gz 00:01:11.085 [Pipeline] sh 00:01:11.373 + git -C spdk log --oneline -n5 00:01:11.373 0617ba6b2 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev 00:01:11.373 bb877d8c1 nvmf: Expose DIF type of namespace to host again 00:01:11.373 9f3071c5f nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write 00:01:11.373 5ca6db5da nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK 00:01:11.373 f7ce15267 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set 00:01:11.386 [Pipeline] } 00:01:11.399 [Pipeline] // stage 00:01:11.407 [Pipeline] stage 00:01:11.409 [Pipeline] { (Prepare) 00:01:11.420 [Pipeline] writeFile 00:01:11.430 [Pipeline] sh 00:01:11.714 + logger -p user.info -t JENKINS-CI 00:01:11.728 [Pipeline] sh 00:01:12.017 + logger -p user.info -t JENKINS-CI 00:01:12.030 [Pipeline] sh 00:01:12.314 + cat autorun-spdk.conf 00:01:12.314 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.314 SPDK_TEST_NVMF=1 00:01:12.314 SPDK_TEST_NVME_CLI=1 00:01:12.314 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.314 SPDK_TEST_NVMF_NICS=e810 00:01:12.314 SPDK_TEST_VFIOUSER=1 00:01:12.314 SPDK_RUN_UBSAN=1 00:01:12.314 NET_TYPE=phy 00:01:12.322 RUN_NIGHTLY=0 00:01:12.327 [Pipeline] readFile 00:01:12.353 [Pipeline] withEnv 00:01:12.355 [Pipeline] { 00:01:12.367 [Pipeline] sh 00:01:12.757 + set -ex 00:01:12.757 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:12.757 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:12.757 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.757 ++ SPDK_TEST_NVMF=1 00:01:12.757 ++ SPDK_TEST_NVME_CLI=1 
00:01:12.757 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.757 ++ SPDK_TEST_NVMF_NICS=e810 00:01:12.757 ++ SPDK_TEST_VFIOUSER=1 00:01:12.757 ++ SPDK_RUN_UBSAN=1 00:01:12.757 ++ NET_TYPE=phy 00:01:12.757 ++ RUN_NIGHTLY=0 00:01:12.757 + case $SPDK_TEST_NVMF_NICS in 00:01:12.757 + DRIVERS=ice 00:01:12.757 + [[ tcp == \r\d\m\a ]] 00:01:12.757 + [[ -n ice ]] 00:01:12.757 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:12.757 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:12.757 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:12.757 rmmod: ERROR: Module irdma is not currently loaded 00:01:12.757 rmmod: ERROR: Module i40iw is not currently loaded 00:01:12.757 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:12.757 + true 00:01:12.757 + for D in $DRIVERS 00:01:12.757 + sudo modprobe ice 00:01:12.757 + exit 0 00:01:12.768 [Pipeline] } 00:01:12.783 [Pipeline] // withEnv 00:01:12.789 [Pipeline] } 00:01:12.803 [Pipeline] // stage 00:01:12.813 [Pipeline] catchError 00:01:12.815 [Pipeline] { 00:01:12.829 [Pipeline] timeout 00:01:12.829 Timeout set to expire in 1 hr 0 min 00:01:12.830 [Pipeline] { 00:01:12.841 [Pipeline] stage 00:01:12.843 [Pipeline] { (Tests) 00:01:12.855 [Pipeline] sh 00:01:13.147 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:13.147 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:13.147 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:13.147 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:13.147 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:13.147 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:13.147 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:13.147 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:13.147 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:13.147 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:13.147 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:13.147 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:13.147 + source /etc/os-release 00:01:13.147 ++ NAME='Fedora Linux' 00:01:13.147 ++ VERSION='39 (Cloud Edition)' 00:01:13.147 ++ ID=fedora 00:01:13.147 ++ VERSION_ID=39 00:01:13.147 ++ VERSION_CODENAME= 00:01:13.147 ++ PLATFORM_ID=platform:f39 00:01:13.147 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:13.147 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:13.147 ++ LOGO=fedora-logo-icon 00:01:13.147 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:13.147 ++ HOME_URL=https://fedoraproject.org/ 00:01:13.147 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:13.147 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:13.147 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:13.147 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:13.147 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:13.147 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:13.147 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:13.147 ++ SUPPORT_END=2024-11-12 00:01:13.147 ++ VARIANT='Cloud Edition' 00:01:13.147 ++ VARIANT_ID=cloud 00:01:13.147 + uname -a 00:01:13.147 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:13.147 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:16.450 Hugepages 00:01:16.450 node hugesize free / total 00:01:16.450 node0 1048576kB 0 / 0 00:01:16.450 node0 2048kB 0 / 0 00:01:16.450 node1 1048576kB 0 / 0 00:01:16.450 node1 2048kB 0 / 0 00:01:16.450 00:01:16.450 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:16.450 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:16.450 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:16.450 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:16.450 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:16.450 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:16.450 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:16.450 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:16.450 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:16.450 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:16.450 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:16.450 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:16.450 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:16.450 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:16.450 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:16.450 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:16.450 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:16.450 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:16.450 + rm -f /tmp/spdk-ld-path 00:01:16.450 + source autorun-spdk.conf 00:01:16.450 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.450 ++ SPDK_TEST_NVMF=1 00:01:16.450 ++ SPDK_TEST_NVME_CLI=1 00:01:16.450 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:16.450 ++ SPDK_TEST_NVMF_NICS=e810 00:01:16.450 ++ SPDK_TEST_VFIOUSER=1 00:01:16.450 ++ SPDK_RUN_UBSAN=1 00:01:16.450 ++ NET_TYPE=phy 00:01:16.450 ++ RUN_NIGHTLY=0 00:01:16.450 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:16.450 + [[ -n '' ]] 00:01:16.450 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:16.450 + for M in /var/spdk/build-*-manifest.txt 00:01:16.450 + [[ -f 
/var/spdk/build-kernel-manifest.txt ]] 00:01:16.450 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:16.450 + for M in /var/spdk/build-*-manifest.txt 00:01:16.450 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:16.450 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:16.450 + for M in /var/spdk/build-*-manifest.txt 00:01:16.450 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:16.450 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:16.450 ++ uname 00:01:16.450 + [[ Linux == \L\i\n\u\x ]] 00:01:16.450 + sudo dmesg -T 00:01:16.450 + sudo dmesg --clear 00:01:16.450 + dmesg_pid=3295863 00:01:16.450 + [[ Fedora Linux == FreeBSD ]] 00:01:16.450 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:16.450 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:16.450 + sudo dmesg -Tw 00:01:16.450 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:16.450 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:16.450 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:16.450 + [[ -x /usr/src/fio-static/fio ]] 00:01:16.450 + export FIO_BIN=/usr/src/fio-static/fio 00:01:16.450 + FIO_BIN=/usr/src/fio-static/fio 00:01:16.450 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:16.450 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:16.450 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:16.450 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:16.450 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:16.450 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:16.450 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:16.450 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:16.450 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:16.712 19:39:17 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:16.712 19:39:17 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:16.712 19:39:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.712 19:39:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:16.712 19:39:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:16.712 19:39:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:16.712 19:39:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:16.712 19:39:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:16.712 19:39:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:16.712 19:39:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:16.712 19:39:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:16.712 19:39:17 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:16.712 19:39:17 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:16.712 19:39:17 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:16.712 19:39:17 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:16.712 19:39:17 -- 
scripts/common.sh@15 -- $ shopt -s extglob 00:01:16.712 19:39:17 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:16.712 19:39:17 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:16.712 19:39:17 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:16.712 19:39:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.712 19:39:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.712 19:39:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.712 19:39:17 -- paths/export.sh@5 -- $ export PATH 00:01:16.712 19:39:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.712 19:39:17 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:16.712 19:39:17 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:16.712 19:39:17 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732646357.XXXXXX 00:01:16.712 19:39:17 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732646357.uA3BLG 00:01:16.712 19:39:17 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:16.712 19:39:17 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:16.712 19:39:17 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:16.712 19:39:17 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:16.712 19:39:17 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:16.712 19:39:17 -- 
common/autobuild_common.sh@509 -- $ get_config_params 00:01:16.712 19:39:17 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:16.712 19:39:17 -- common/autotest_common.sh@10 -- $ set +x 00:01:16.713 19:39:17 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:16.713 19:39:17 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:16.713 19:39:17 -- pm/common@17 -- $ local monitor 00:01:16.713 19:39:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:16.713 19:39:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:16.713 19:39:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:16.713 19:39:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:16.713 19:39:17 -- pm/common@21 -- $ date +%s 00:01:16.713 19:39:17 -- pm/common@25 -- $ sleep 1 00:01:16.713 19:39:17 -- pm/common@21 -- $ date +%s 00:01:16.713 19:39:17 -- pm/common@21 -- $ date +%s 00:01:16.713 19:39:17 -- pm/common@21 -- $ date +%s 00:01:16.713 19:39:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732646357 00:01:16.713 19:39:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732646357 00:01:16.713 19:39:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732646357 00:01:16.713 19:39:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732646357 00:01:16.713 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732646357_collect-cpu-load.pm.log 00:01:16.713 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732646357_collect-vmstat.pm.log 00:01:16.713 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732646357_collect-cpu-temp.pm.log 00:01:16.713 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732646357_collect-bmc-pm.bmc.pm.log 00:01:17.656 19:39:18 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:17.656 19:39:18 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:17.656 19:39:18 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:17.656 19:39:18 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:17.656 19:39:18 -- spdk/autobuild.sh@16 -- $ date -u 00:01:17.656 Tue Nov 26 06:39:18 PM UTC 2024 00:01:17.656 19:39:18 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:17.656 v25.01-pre-272-g0617ba6b2 00:01:17.656 19:39:18 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:17.656 19:39:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:17.656 19:39:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:17.656 19:39:18 -- common/autotest_common.sh@1105 -- $ 
'[' 3 -le 1 ']' 00:01:17.656 19:39:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:17.656 19:39:18 -- common/autotest_common.sh@10 -- $ set +x 00:01:17.918 ************************************ 00:01:17.918 START TEST ubsan 00:01:17.918 ************************************ 00:01:17.918 19:39:18 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:17.918 using ubsan 00:01:17.918 00:01:17.918 real 0m0.001s 00:01:17.918 user 0m0.000s 00:01:17.918 sys 0m0.001s 00:01:17.918 19:39:18 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:17.918 19:39:18 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:17.918 ************************************ 00:01:17.918 END TEST ubsan 00:01:17.918 ************************************ 00:01:17.918 19:39:18 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:17.918 19:39:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:17.918 19:39:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:17.918 19:39:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:17.918 19:39:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:17.918 19:39:18 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:17.918 19:39:18 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:17.918 19:39:18 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:17.918 19:39:18 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:17.918 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:17.918 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:18.496 Using 'verbs' RDMA provider 00:01:34.346 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:46.581 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:47.153 Creating mk/config.mk...done. 00:01:47.153 Creating mk/cc.flags.mk...done. 00:01:47.153 Type 'make' to build. 00:01:47.153 19:39:47 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:01:47.153 19:39:47 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:47.153 19:39:47 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:47.153 19:39:47 -- common/autotest_common.sh@10 -- $ set +x 00:01:47.153 ************************************ 00:01:47.153 START TEST make 00:01:47.153 ************************************ 00:01:47.153 19:39:47 make -- common/autotest_common.sh@1129 -- $ make -j144 00:01:47.726 make[1]: Nothing to be done for 'all'. 
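The ./configure invocation buried in the autobuild output above is self-contained enough to replay by hand. A sketch assuming a fresh SPDK checkout and fio sources present at /usr/src/fio; the flags are copied verbatim from the log, while the clone URL and nproc job count are illustrative (the CI host itself ran make -j144):

    git clone https://github.com/spdk/spdk.git && cd spdk
    git submodule update --init          # pulls dpdk, libvfio-user, isa-l, ...
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j"$(nproc)"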
00:01:49.106 The Meson build system 00:01:49.106 Version: 1.5.0 00:01:49.106 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:49.106 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:49.106 Build type: native build 00:01:49.106 Project name: libvfio-user 00:01:49.106 Project version: 0.0.1 00:01:49.106 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:49.106 C linker for the host machine: cc ld.bfd 2.40-14 00:01:49.106 Host machine cpu family: x86_64 00:01:49.106 Host machine cpu: x86_64 00:01:49.106 Run-time dependency threads found: YES 00:01:49.106 Library dl found: YES 00:01:49.106 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:49.106 Run-time dependency json-c found: YES 0.17 00:01:49.106 Run-time dependency cmocka found: YES 1.1.7 00:01:49.106 Program pytest-3 found: NO 00:01:49.106 Program flake8 found: NO 00:01:49.106 Program misspell-fixer found: NO 00:01:49.106 Program restructuredtext-lint found: NO 00:01:49.106 Program valgrind found: YES (/usr/bin/valgrind) 00:01:49.106 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:49.106 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:49.106 Compiler for C supports arguments -Wwrite-strings: YES 00:01:49.106 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:49.106 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:49.106 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:49.106 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:49.106 Build targets in project: 8 00:01:49.106 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:49.106 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:49.106 00:01:49.106 libvfio-user 0.0.1 00:01:49.106 00:01:49.106 User defined options 00:01:49.106 buildtype : debug 00:01:49.106 default_library: shared 00:01:49.106 libdir : /usr/local/lib 00:01:49.106 00:01:49.106 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:49.366 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:49.627 [1/37] Compiling C object samples/null.p/null.c.o 00:01:49.627 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:49.627 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:49.627 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:49.627 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:49.627 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:49.627 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:49.627 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:49.627 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:49.627 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:49.627 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:49.627 [12/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:49.627 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:49.627 [14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:49.627 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:49.627 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:49.627 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:49.627 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:49.627 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:49.627 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:49.627 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:49.627 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:49.627 [23/37] Compiling C object samples/server.p/server.c.o 00:01:49.627 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:49.627 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:49.627 [26/37] Compiling C object samples/client.p/client.c.o 00:01:49.627 [27/37] Linking target samples/client 00:01:49.627 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:49.627 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:49.888 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:49.888 [31/37] Linking target test/unit_tests 00:01:49.888 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:49.888 [33/37] Linking target samples/null 00:01:49.888 [34/37] Linking target samples/lspci 00:01:49.888 [35/37] Linking target samples/server 00:01:49.888 [36/37] Linking target samples/gpio-pci-idio-16 00:01:49.888 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:49.888 INFO: autodetecting backend as ninja 00:01:49.888 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
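Stripped of CI paths, the libvfio-user sub-build above is a stock three-step meson flow; the buildtype, default_library, and libdir values come from the "User defined options" summary, while the SRC/BUILD directory names here are shorthand stand-ins for the spdk tree paths in the log:

    SRC=spdk/libvfio-user
    BUILD=spdk/build/libvfio-user/build-debug
    meson setup "$BUILD" "$SRC" --buildtype debug \
        --default-library shared --libdir /usr/local/lib
    ninja -C "$BUILD"                    # the [1/37]..[37/37] compile pass above
    # Staged install into the spdk build tree, as replayed next in the log.
    DESTDIR=$PWD/spdk/build/libvfio-user meson install --quiet -C "$BUILD"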
00:01:50.149 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:50.408 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:50.408 ninja: no work to do. 00:01:57.000 The Meson build system 00:01:57.000 Version: 1.5.0 00:01:57.000 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:57.000 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:57.000 Build type: native build 00:01:57.000 Program cat found: YES (/usr/bin/cat) 00:01:57.000 Project name: DPDK 00:01:57.000 Project version: 24.03.0 00:01:57.000 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:57.000 C linker for the host machine: cc ld.bfd 2.40-14 00:01:57.000 Host machine cpu family: x86_64 00:01:57.000 Host machine cpu: x86_64 00:01:57.000 Message: ## Building in Developer Mode ## 00:01:57.000 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:57.000 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:57.000 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:57.000 Program python3 found: YES (/usr/bin/python3) 00:01:57.000 Program cat found: YES (/usr/bin/cat) 00:01:57.000 Compiler for C supports arguments -march=native: YES 00:01:57.000 Checking for size of "void *" : 8 00:01:57.000 Checking for size of "void *" : 8 (cached) 00:01:57.000 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:57.000 Library m found: YES 00:01:57.000 Library numa found: YES 00:01:57.000 Has header "numaif.h" : YES 00:01:57.000 Library fdt found: NO 00:01:57.000 Library execinfo found: NO 00:01:57.000 Has header "execinfo.h" : YES 00:01:57.000 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:57.000 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:57.000 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:57.000 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:57.000 Run-time dependency openssl found: YES 3.1.1 00:01:57.000 Run-time dependency libpcap found: YES 1.10.4 00:01:57.000 Has header "pcap.h" with dependency libpcap: YES 00:01:57.000 Compiler for C supports arguments -Wcast-qual: YES 00:01:57.000 Compiler for C supports arguments -Wdeprecated: YES 00:01:57.000 Compiler for C supports arguments -Wformat: YES 00:01:57.000 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:57.000 Compiler for C supports arguments -Wformat-security: NO 00:01:57.000 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:57.000 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:57.000 Compiler for C supports arguments -Wnested-externs: YES 00:01:57.000 Compiler for C supports arguments -Wold-style-definition: YES 00:01:57.000 Compiler for C supports arguments -Wpointer-arith: YES 00:01:57.000 Compiler for C supports arguments -Wsign-compare: YES 00:01:57.000 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:57.000 Compiler for C supports arguments -Wundef: YES 00:01:57.000 Compiler for C supports arguments -Wwrite-strings: YES 00:01:57.000 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:57.000 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:01:57.000 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:57.000 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:57.000 Program objdump found: YES (/usr/bin/objdump) 00:01:57.000 Compiler for C supports arguments -mavx512f: YES 00:01:57.000 Checking if "AVX512 checking" compiles: YES 00:01:57.000 Fetching value of define "__SSE4_2__" : 1 00:01:57.000 Fetching value of define "__AES__" : 1 00:01:57.000 Fetching value of define "__AVX__" : 1 00:01:57.000 Fetching value of define "__AVX2__" : 1 00:01:57.000 Fetching value of define "__AVX512BW__" : 1 00:01:57.000 Fetching value of define "__AVX512CD__" : 1 00:01:57.000 Fetching value of define "__AVX512DQ__" : 1 00:01:57.000 Fetching value of define "__AVX512F__" : 1 00:01:57.000 Fetching value of define "__AVX512VL__" : 1 00:01:57.000 Fetching value of define "__PCLMUL__" : 1 00:01:57.000 Fetching value of define "__RDRND__" : 1 00:01:57.000 Fetching value of define "__RDSEED__" : 1 00:01:57.000 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:57.001 Fetching value of define "__znver1__" : (undefined) 00:01:57.001 Fetching value of define "__znver2__" : (undefined) 00:01:57.001 Fetching value of define "__znver3__" : (undefined) 00:01:57.001 Fetching value of define "__znver4__" : (undefined) 00:01:57.001 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:57.001 Message: lib/log: Defining dependency "log" 00:01:57.001 Message: lib/kvargs: Defining dependency "kvargs" 00:01:57.001 Message: lib/telemetry: Defining dependency "telemetry" 00:01:57.001 Checking for function "getentropy" : NO 00:01:57.001 Message: lib/eal: Defining dependency "eal" 00:01:57.001 Message: lib/ring: Defining dependency "ring" 00:01:57.001 Message: lib/rcu: Defining dependency "rcu" 00:01:57.001 Message: lib/mempool: Defining dependency "mempool" 00:01:57.001 Message: lib/mbuf: Defining dependency "mbuf" 00:01:57.001 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:57.001 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:57.001 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:57.001 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:57.001 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:57.001 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:57.001 Compiler for C supports arguments -mpclmul: YES 00:01:57.001 Compiler for C supports arguments -maes: YES 00:01:57.001 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:57.001 Compiler for C supports arguments -mavx512bw: YES 00:01:57.001 Compiler for C supports arguments -mavx512dq: YES 00:01:57.001 Compiler for C supports arguments -mavx512vl: YES 00:01:57.001 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:57.001 Compiler for C supports arguments -mavx2: YES 00:01:57.001 Compiler for C supports arguments -mavx: YES 00:01:57.001 Message: lib/net: Defining dependency "net" 00:01:57.001 Message: lib/meter: Defining dependency "meter" 00:01:57.001 Message: lib/ethdev: Defining dependency "ethdev" 00:01:57.001 Message: lib/pci: Defining dependency "pci" 00:01:57.001 Message: lib/cmdline: Defining dependency "cmdline" 00:01:57.001 Message: lib/hash: Defining dependency "hash" 00:01:57.001 Message: lib/timer: Defining dependency "timer" 00:01:57.001 Message: lib/compressdev: Defining dependency "compressdev" 00:01:57.001 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:57.001 Message: lib/dmadev: Defining dependency "dmadev" 
00:01:57.001 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:57.001 Message: lib/power: Defining dependency "power" 00:01:57.001 Message: lib/reorder: Defining dependency "reorder" 00:01:57.001 Message: lib/security: Defining dependency "security" 00:01:57.001 Has header "linux/userfaultfd.h" : YES 00:01:57.001 Has header "linux/vduse.h" : YES 00:01:57.001 Message: lib/vhost: Defining dependency "vhost" 00:01:57.001 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:57.001 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:57.001 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:57.001 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:57.001 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:57.001 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:57.001 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:57.001 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:57.001 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:57.001 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:57.001 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:57.001 Configuring doxy-api-html.conf using configuration 00:01:57.001 Configuring doxy-api-man.conf using configuration 00:01:57.001 Program mandb found: YES (/usr/bin/mandb) 00:01:57.001 Program sphinx-build found: NO 00:01:57.001 Configuring rte_build_config.h using configuration 00:01:57.001 Message: 00:01:57.001 ================= 00:01:57.001 Applications Enabled 00:01:57.001 ================= 00:01:57.001 00:01:57.001 apps: 00:01:57.001 00:01:57.001 00:01:57.001 Message: 00:01:57.001 ================= 00:01:57.001 Libraries Enabled 00:01:57.001 ================= 00:01:57.001 00:01:57.001 libs: 00:01:57.001 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:57.001 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:57.001 cryptodev, dmadev, power, reorder, security, vhost, 00:01:57.001 00:01:57.001 Message: 00:01:57.001 =============== 00:01:57.001 Drivers Enabled 00:01:57.001 =============== 00:01:57.001 00:01:57.001 common: 00:01:57.001 00:01:57.001 bus: 00:01:57.001 pci, vdev, 00:01:57.001 mempool: 00:01:57.001 ring, 00:01:57.001 dma: 00:01:57.001 00:01:57.001 net: 00:01:57.001 00:01:57.001 crypto: 00:01:57.001 00:01:57.001 compress: 00:01:57.001 00:01:57.001 vdpa: 00:01:57.001 00:01:57.001 00:01:57.001 Message: 00:01:57.001 ================= 00:01:57.001 Content Skipped 00:01:57.001 ================= 00:01:57.001 00:01:57.001 apps: 00:01:57.001 dumpcap: explicitly disabled via build config 00:01:57.001 graph: explicitly disabled via build config 00:01:57.001 pdump: explicitly disabled via build config 00:01:57.001 proc-info: explicitly disabled via build config 00:01:57.001 test-acl: explicitly disabled via build config 00:01:57.001 test-bbdev: explicitly disabled via build config 00:01:57.001 test-cmdline: explicitly disabled via build config 00:01:57.001 test-compress-perf: explicitly disabled via build config 00:01:57.001 test-crypto-perf: explicitly disabled via build config 00:01:57.001 test-dma-perf: explicitly disabled via build config 00:01:57.001 test-eventdev: explicitly disabled via build config 00:01:57.001 test-fib: explicitly disabled via build config 00:01:57.001 test-flow-perf: explicitly disabled via build config 00:01:57.001 test-gpudev: explicitly disabled 
via build config 00:01:57.001 test-mldev: explicitly disabled via build config 00:01:57.001 test-pipeline: explicitly disabled via build config 00:01:57.001 test-pmd: explicitly disabled via build config 00:01:57.001 test-regex: explicitly disabled via build config 00:01:57.001 test-sad: explicitly disabled via build config 00:01:57.001 test-security-perf: explicitly disabled via build config 00:01:57.001 00:01:57.001 libs: 00:01:57.001 argparse: explicitly disabled via build config 00:01:57.001 metrics: explicitly disabled via build config 00:01:57.001 acl: explicitly disabled via build config 00:01:57.001 bbdev: explicitly disabled via build config 00:01:57.001 bitratestats: explicitly disabled via build config 00:01:57.001 bpf: explicitly disabled via build config 00:01:57.001 cfgfile: explicitly disabled via build config 00:01:57.001 distributor: explicitly disabled via build config 00:01:57.001 efd: explicitly disabled via build config 00:01:57.001 eventdev: explicitly disabled via build config 00:01:57.001 dispatcher: explicitly disabled via build config 00:01:57.001 gpudev: explicitly disabled via build config 00:01:57.001 gro: explicitly disabled via build config 00:01:57.001 gso: explicitly disabled via build config 00:01:57.001 ip_frag: explicitly disabled via build config 00:01:57.001 jobstats: explicitly disabled via build config 00:01:57.001 latencystats: explicitly disabled via build config 00:01:57.002 lpm: explicitly disabled via build config 00:01:57.002 member: explicitly disabled via build config 00:01:57.002 pcapng: explicitly disabled via build config 00:01:57.002 rawdev: explicitly disabled via build config 00:01:57.002 regexdev: explicitly disabled via build config 00:01:57.002 mldev: explicitly disabled via build config 00:01:57.002 rib: explicitly disabled via build config 00:01:57.002 sched: explicitly disabled via build config 00:01:57.002 stack: explicitly disabled via build config 00:01:57.002 ipsec: explicitly disabled via build config 00:01:57.002 pdcp: explicitly disabled via build config 00:01:57.002 fib: explicitly disabled via build config 00:01:57.002 port: explicitly disabled via build config 00:01:57.002 pdump: explicitly disabled via build config 00:01:57.002 table: explicitly disabled via build config 00:01:57.002 pipeline: explicitly disabled via build config 00:01:57.002 graph: explicitly disabled via build config 00:01:57.002 node: explicitly disabled via build config 00:01:57.002 00:01:57.002 drivers: 00:01:57.002 common/cpt: not in enabled drivers build config 00:01:57.002 common/dpaax: not in enabled drivers build config 00:01:57.002 common/iavf: not in enabled drivers build config 00:01:57.002 common/idpf: not in enabled drivers build config 00:01:57.002 common/ionic: not in enabled drivers build config 00:01:57.002 common/mvep: not in enabled drivers build config 00:01:57.002 common/octeontx: not in enabled drivers build config 00:01:57.002 bus/auxiliary: not in enabled drivers build config 00:01:57.002 bus/cdx: not in enabled drivers build config 00:01:57.002 bus/dpaa: not in enabled drivers build config 00:01:57.002 bus/fslmc: not in enabled drivers build config 00:01:57.002 bus/ifpga: not in enabled drivers build config 00:01:57.002 bus/platform: not in enabled drivers build config 00:01:57.002 bus/uacce: not in enabled drivers build config 00:01:57.002 bus/vmbus: not in enabled drivers build config 00:01:57.002 common/cnxk: not in enabled drivers build config 00:01:57.002 common/mlx5: not in enabled drivers build config 00:01:57.002 
common/nfp: not in enabled drivers build config 00:01:57.002 common/nitrox: not in enabled drivers build config 00:01:57.002 common/qat: not in enabled drivers build config 00:01:57.002 common/sfc_efx: not in enabled drivers build config 00:01:57.002 mempool/bucket: not in enabled drivers build config 00:01:57.002 mempool/cnxk: not in enabled drivers build config 00:01:57.002 mempool/dpaa: not in enabled drivers build config 00:01:57.002 mempool/dpaa2: not in enabled drivers build config 00:01:57.002 mempool/octeontx: not in enabled drivers build config 00:01:57.002 mempool/stack: not in enabled drivers build config 00:01:57.002 dma/cnxk: not in enabled drivers build config 00:01:57.002 dma/dpaa: not in enabled drivers build config 00:01:57.002 dma/dpaa2: not in enabled drivers build config 00:01:57.002 dma/hisilicon: not in enabled drivers build config 00:01:57.002 dma/idxd: not in enabled drivers build config 00:01:57.002 dma/ioat: not in enabled drivers build config 00:01:57.002 dma/skeleton: not in enabled drivers build config 00:01:57.002 net/af_packet: not in enabled drivers build config 00:01:57.002 net/af_xdp: not in enabled drivers build config 00:01:57.002 net/ark: not in enabled drivers build config 00:01:57.002 net/atlantic: not in enabled drivers build config 00:01:57.002 net/avp: not in enabled drivers build config 00:01:57.002 net/axgbe: not in enabled drivers build config 00:01:57.002 net/bnx2x: not in enabled drivers build config 00:01:57.002 net/bnxt: not in enabled drivers build config 00:01:57.002 net/bonding: not in enabled drivers build config 00:01:57.002 net/cnxk: not in enabled drivers build config 00:01:57.002 net/cpfl: not in enabled drivers build config 00:01:57.002 net/cxgbe: not in enabled drivers build config 00:01:57.002 net/dpaa: not in enabled drivers build config 00:01:57.002 net/dpaa2: not in enabled drivers build config 00:01:57.002 net/e1000: not in enabled drivers build config 00:01:57.002 net/ena: not in enabled drivers build config 00:01:57.002 net/enetc: not in enabled drivers build config 00:01:57.002 net/enetfec: not in enabled drivers build config 00:01:57.002 net/enic: not in enabled drivers build config 00:01:57.002 net/failsafe: not in enabled drivers build config 00:01:57.002 net/fm10k: not in enabled drivers build config 00:01:57.002 net/gve: not in enabled drivers build config 00:01:57.002 net/hinic: not in enabled drivers build config 00:01:57.002 net/hns3: not in enabled drivers build config 00:01:57.002 net/i40e: not in enabled drivers build config 00:01:57.002 net/iavf: not in enabled drivers build config 00:01:57.002 net/ice: not in enabled drivers build config 00:01:57.002 net/idpf: not in enabled drivers build config 00:01:57.002 net/igc: not in enabled drivers build config 00:01:57.002 net/ionic: not in enabled drivers build config 00:01:57.002 net/ipn3ke: not in enabled drivers build config 00:01:57.002 net/ixgbe: not in enabled drivers build config 00:01:57.002 net/mana: not in enabled drivers build config 00:01:57.002 net/memif: not in enabled drivers build config 00:01:57.002 net/mlx4: not in enabled drivers build config 00:01:57.002 net/mlx5: not in enabled drivers build config 00:01:57.002 net/mvneta: not in enabled drivers build config 00:01:57.002 net/mvpp2: not in enabled drivers build config 00:01:57.002 net/netvsc: not in enabled drivers build config 00:01:57.002 net/nfb: not in enabled drivers build config 00:01:57.002 net/nfp: not in enabled drivers build config 00:01:57.002 net/ngbe: not in enabled drivers build 
config 00:01:57.002 net/null: not in enabled drivers build config 00:01:57.002 net/octeontx: not in enabled drivers build config 00:01:57.002 net/octeon_ep: not in enabled drivers build config 00:01:57.002 net/pcap: not in enabled drivers build config 00:01:57.002 net/pfe: not in enabled drivers build config 00:01:57.002 net/qede: not in enabled drivers build config 00:01:57.002 net/ring: not in enabled drivers build config 00:01:57.002 net/sfc: not in enabled drivers build config 00:01:57.002 net/softnic: not in enabled drivers build config 00:01:57.002 net/tap: not in enabled drivers build config 00:01:57.002 net/thunderx: not in enabled drivers build config 00:01:57.002 net/txgbe: not in enabled drivers build config 00:01:57.002 net/vdev_netvsc: not in enabled drivers build config 00:01:57.002 net/vhost: not in enabled drivers build config 00:01:57.002 net/virtio: not in enabled drivers build config 00:01:57.002 net/vmxnet3: not in enabled drivers build config 00:01:57.002 raw/*: missing internal dependency, "rawdev" 00:01:57.002 crypto/armv8: not in enabled drivers build config 00:01:57.002 crypto/bcmfs: not in enabled drivers build config 00:01:57.002 crypto/caam_jr: not in enabled drivers build config 00:01:57.002 crypto/ccp: not in enabled drivers build config 00:01:57.002 crypto/cnxk: not in enabled drivers build config 00:01:57.002 crypto/dpaa_sec: not in enabled drivers build config 00:01:57.002 crypto/dpaa2_sec: not in enabled drivers build config 00:01:57.002 crypto/ipsec_mb: not in enabled drivers build config 00:01:57.002 crypto/mlx5: not in enabled drivers build config 00:01:57.002 crypto/mvsam: not in enabled drivers build config 00:01:57.002 crypto/nitrox: not in enabled drivers build config 00:01:57.002 crypto/null: not in enabled drivers build config 00:01:57.002 crypto/octeontx: not in enabled drivers build config 00:01:57.002 crypto/openssl: not in enabled drivers build config 00:01:57.002 crypto/scheduler: not in enabled drivers build config 00:01:57.002 crypto/uadk: not in enabled drivers build config 00:01:57.002 crypto/virtio: not in enabled drivers build config 00:01:57.002 compress/isal: not in enabled drivers build config 00:01:57.002 compress/mlx5: not in enabled drivers build config 00:01:57.002 compress/nitrox: not in enabled drivers build config 00:01:57.002 compress/octeontx: not in enabled drivers build config 00:01:57.002 compress/zlib: not in enabled drivers build config 00:01:57.002 regex/*: missing internal dependency, "regexdev" 00:01:57.002 ml/*: missing internal dependency, "mldev" 00:01:57.002 vdpa/ifc: not in enabled drivers build config 00:01:57.002 vdpa/mlx5: not in enabled drivers build config 00:01:57.002 vdpa/nfp: not in enabled drivers build config 00:01:57.002 vdpa/sfc: not in enabled drivers build config 00:01:57.002 event/*: missing internal dependency, "eventdev" 00:01:57.003 baseband/*: missing internal dependency, "bbdev" 00:01:57.003 gpu/*: missing internal dependency, "gpudev" 00:01:57.003 00:01:57.003 00:01:57.003 Build targets in project: 84 00:01:57.003 00:01:57.003 DPDK 24.03.0 00:01:57.003 00:01:57.003 User defined options 00:01:57.003 buildtype : debug 00:01:57.003 default_library : shared 00:01:57.003 libdir : lib 00:01:57.003 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:57.003 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:57.003 c_link_args : 00:01:57.003 cpu_instruction_set: native 00:01:57.003 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:57.003 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:57.003 enable_docs : false 00:01:57.003 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:57.003 enable_kmods : false 00:01:57.003 max_lcores : 128 00:01:57.003 tests : false 00:01:57.003 00:01:57.003 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:57.003 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:57.003 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:57.003 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:57.003 [3/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:57.003 [4/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:57.003 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:57.003 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:57.003 [7/267] Linking static target lib/librte_kvargs.a 00:01:57.003 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:57.003 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:57.003 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:57.003 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:57.003 [12/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:57.003 [13/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:57.003 [14/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:57.003 [15/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:57.003 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:57.003 [17/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:57.003 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:57.003 [19/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:57.003 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:57.003 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:57.003 [22/267] Linking static target lib/librte_log.a 00:01:57.003 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:57.003 [24/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:57.262 [25/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:57.262 [26/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:57.262 [27/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:57.262 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:57.262 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:57.262 [30/267] Linking static target lib/librte_pci.a 
00:01:57.262 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:57.262 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:57.262 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:57.262 [34/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:57.262 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:57.262 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:57.262 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:57.262 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:57.520 [39/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.520 [40/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:57.520 [41/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.520 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:57.520 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:57.520 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:57.520 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:57.520 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:57.520 [47/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:57.520 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:57.520 [49/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:57.520 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:57.520 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:57.520 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:57.520 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:57.520 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:57.520 [55/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:57.520 [56/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:57.520 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:57.520 [58/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:57.520 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:57.520 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:57.520 [61/267] Linking static target lib/librte_telemetry.a 00:01:57.520 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:57.520 [63/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:57.520 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:57.520 [65/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:57.520 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:57.520 [67/267] Linking static target lib/librte_meter.a 00:01:57.520 [68/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:57.520 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:57.521 [70/267] Compiling C object 
lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:57.521 [71/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:57.521 [72/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:57.521 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:57.521 [74/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:57.521 [75/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:57.521 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:57.521 [77/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:57.521 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:57.521 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:57.521 [80/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:57.521 [81/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:57.521 [82/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:57.521 [83/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:57.521 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:57.521 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:57.521 [86/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:57.521 [87/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:57.521 [88/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:57.521 [89/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:57.521 [90/267] Linking static target lib/librte_ring.a 00:01:57.521 [91/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:57.521 [92/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:57.521 [93/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:57.521 [94/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:57.521 [95/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:57.521 [96/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:57.521 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:57.521 [98/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:57.521 [99/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:57.521 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:57.521 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:57.521 [102/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:57.521 [103/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:57.521 [104/267] Linking static target lib/librte_timer.a 00:01:57.521 [105/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:57.521 [106/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:57.521 [107/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:57.521 [108/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:57.521 [109/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:57.521 [110/267] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:57.521 [111/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:57.521 [112/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:57.521 [113/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:57.521 [114/267] Linking static target lib/librte_cmdline.a 00:01:57.521 [115/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:57.521 [116/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:57.521 [117/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:57.521 [118/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:57.521 [119/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:57.521 [120/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:57.521 [121/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:57.521 [122/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:57.521 [123/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:57.521 [124/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:57.521 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:57.521 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:57.521 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:57.521 [128/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:57.782 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:57.782 [130/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:57.782 [131/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:57.782 [132/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:57.782 [133/267] Linking static target lib/librte_rcu.a 00:01:57.782 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:57.782 [135/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:57.782 [136/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:57.782 [137/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.782 [138/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:57.782 [139/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:57.782 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:57.782 [141/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:57.782 [142/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:57.782 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:57.782 [144/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:57.782 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:57.782 [146/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:57.782 [147/267] Linking static target lib/librte_mempool.a 00:01:57.782 [148/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:57.782 [149/267] Linking static target lib/librte_dmadev.a 00:01:57.782 [150/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:57.782 
[151/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:57.782 [152/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:57.782 [153/267] Linking target lib/librte_log.so.24.1 00:01:57.782 [154/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:57.782 [155/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:57.782 [156/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:57.782 [157/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:57.782 [158/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:57.782 [159/267] Linking static target lib/librte_net.a 00:01:57.782 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:57.782 [161/267] Linking static target lib/librte_compressdev.a 00:01:57.782 [162/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:57.782 [163/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:57.782 [164/267] Linking static target lib/librte_reorder.a 00:01:57.782 [165/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:57.782 [166/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:57.782 [167/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:57.782 [168/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.782 [169/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:57.782 [170/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:57.782 [171/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:57.782 [172/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:57.782 [173/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:57.782 [174/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:57.782 [175/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:57.782 [176/267] Linking static target lib/librte_security.a 00:01:57.782 [177/267] Linking static target lib/librte_eal.a 00:01:57.782 [178/267] Linking static target lib/librte_power.a 00:01:57.782 [179/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:57.782 [180/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:57.782 [181/267] Linking static target lib/librte_mbuf.a 00:01:57.782 [182/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:57.782 [183/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:57.782 [184/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:57.782 [185/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:57.782 [186/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:57.782 [187/267] Linking static target drivers/librte_bus_vdev.a 00:01:57.782 [188/267] Linking target lib/librte_kvargs.so.24.1 00:01:58.043 [189/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.043 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:58.043 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:58.043 [192/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:58.043 [193/267] 
Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:58.043 [194/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:58.043 [195/267] Linking static target lib/librte_hash.a 00:01:58.043 [196/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:58.043 [197/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:58.043 [198/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:58.043 [199/267] Linking static target drivers/librte_mempool_ring.a 00:01:58.043 [200/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:58.043 [201/267] Linking static target drivers/librte_bus_pci.a 00:01:58.043 [202/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.043 [203/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:58.043 [204/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:58.043 [205/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:58.043 [206/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.043 [207/267] Linking static target lib/librte_cryptodev.a 00:01:58.043 [208/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.043 [209/267] Linking target lib/librte_telemetry.so.24.1 00:01:58.043 [210/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:58.043 [211/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.305 [212/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:58.305 [213/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.305 [214/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.566 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.566 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.566 [217/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:58.566 [218/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.566 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:58.566 [220/267] Linking static target lib/librte_ethdev.a 00:01:58.826 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.826 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.826 [223/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.826 [224/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.826 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.086 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.657 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:59.657 [228/267] Linking static target lib/librte_vhost.a 00:02:00.229 [229/267] 
Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.613 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.196 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.579 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.579 [233/267] Linking target lib/librte_eal.so.24.1 00:02:09.579 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:09.579 [235/267] Linking target lib/librte_ring.so.24.1 00:02:09.579 [236/267] Linking target lib/librte_meter.so.24.1 00:02:09.579 [237/267] Linking target lib/librte_pci.so.24.1 00:02:09.579 [238/267] Linking target lib/librte_dmadev.so.24.1 00:02:09.579 [239/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:09.579 [240/267] Linking target lib/librte_timer.so.24.1 00:02:09.579 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:09.579 [242/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:09.579 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:09.845 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:09.845 [245/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:09.845 [246/267] Linking target lib/librte_rcu.so.24.1 00:02:09.845 [247/267] Linking target lib/librte_mempool.so.24.1 00:02:09.846 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:09.846 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:09.846 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:09.846 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:09.846 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:10.105 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:10.105 [254/267] Linking target lib/librte_reorder.so.24.1 00:02:10.105 [255/267] Linking target lib/librte_net.so.24.1 00:02:10.105 [256/267] Linking target lib/librte_compressdev.so.24.1 00:02:10.105 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:10.367 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:10.367 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:10.367 [260/267] Linking target lib/librte_cmdline.so.24.1 00:02:10.367 [261/267] Linking target lib/librte_hash.so.24.1 00:02:10.367 [262/267] Linking target lib/librte_security.so.24.1 00:02:10.367 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:10.367 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:10.367 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:10.629 [266/267] Linking target lib/librte_power.so.24.1 00:02:10.629 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:10.629 INFO: autodetecting backend as ninja 00:02:10.629 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:13.933 CC lib/ut/ut.o 00:02:13.933 CC lib/ut_mock/mock.o 00:02:13.933 CC lib/log/log.o 00:02:13.933 CC lib/log/log_flags.o 00:02:13.933 CC lib/log/log_deprecated.o 00:02:13.933 LIB 
libspdk_ut.a 00:02:13.933 LIB libspdk_ut_mock.a 00:02:13.933 LIB libspdk_log.a 00:02:13.933 SO libspdk_ut_mock.so.6.0 00:02:13.933 SO libspdk_ut.so.2.0 00:02:13.933 SO libspdk_log.so.7.1 00:02:13.933 SYMLINK libspdk_ut_mock.so 00:02:13.933 SYMLINK libspdk_ut.so 00:02:13.933 SYMLINK libspdk_log.so 00:02:14.504 CC lib/dma/dma.o 00:02:14.504 CC lib/util/base64.o 00:02:14.504 CC lib/util/bit_array.o 00:02:14.504 CC lib/util/cpuset.o 00:02:14.504 CC lib/util/crc16.o 00:02:14.504 CC lib/ioat/ioat.o 00:02:14.504 CC lib/util/crc32.o 00:02:14.504 CXX lib/trace_parser/trace.o 00:02:14.504 CC lib/util/crc32c.o 00:02:14.504 CC lib/util/crc32_ieee.o 00:02:14.504 CC lib/util/crc64.o 00:02:14.504 CC lib/util/dif.o 00:02:14.504 CC lib/util/fd.o 00:02:14.504 CC lib/util/fd_group.o 00:02:14.504 CC lib/util/file.o 00:02:14.504 CC lib/util/hexlify.o 00:02:14.504 CC lib/util/iov.o 00:02:14.504 CC lib/util/math.o 00:02:14.504 CC lib/util/net.o 00:02:14.504 CC lib/util/pipe.o 00:02:14.504 CC lib/util/strerror_tls.o 00:02:14.504 CC lib/util/string.o 00:02:14.504 CC lib/util/uuid.o 00:02:14.504 CC lib/util/xor.o 00:02:14.504 CC lib/util/zipf.o 00:02:14.504 CC lib/util/md5.o 00:02:14.504 CC lib/vfio_user/host/vfio_user_pci.o 00:02:14.504 CC lib/vfio_user/host/vfio_user.o 00:02:14.765 LIB libspdk_dma.a 00:02:14.765 SO libspdk_dma.so.5.0 00:02:14.765 LIB libspdk_ioat.a 00:02:14.765 SO libspdk_ioat.so.7.0 00:02:14.765 SYMLINK libspdk_dma.so 00:02:14.765 SYMLINK libspdk_ioat.so 00:02:14.765 LIB libspdk_vfio_user.a 00:02:15.027 SO libspdk_vfio_user.so.5.0 00:02:15.027 LIB libspdk_util.a 00:02:15.027 SYMLINK libspdk_vfio_user.so 00:02:15.027 SO libspdk_util.so.10.1 00:02:15.288 SYMLINK libspdk_util.so 00:02:15.288 LIB libspdk_trace_parser.a 00:02:15.288 SO libspdk_trace_parser.so.6.0 00:02:15.549 SYMLINK libspdk_trace_parser.so 00:02:15.549 CC lib/conf/conf.o 00:02:15.549 CC lib/json/json_parse.o 00:02:15.549 CC lib/json/json_util.o 00:02:15.549 CC lib/json/json_write.o 00:02:15.549 CC lib/env_dpdk/env.o 00:02:15.549 CC lib/idxd/idxd.o 00:02:15.549 CC lib/env_dpdk/memory.o 00:02:15.549 CC lib/env_dpdk/pci.o 00:02:15.549 CC lib/idxd/idxd_user.o 00:02:15.549 CC lib/rdma_utils/rdma_utils.o 00:02:15.549 CC lib/vmd/vmd.o 00:02:15.549 CC lib/env_dpdk/init.o 00:02:15.549 CC lib/idxd/idxd_kernel.o 00:02:15.549 CC lib/env_dpdk/threads.o 00:02:15.549 CC lib/vmd/led.o 00:02:15.549 CC lib/env_dpdk/pci_ioat.o 00:02:15.549 CC lib/env_dpdk/pci_virtio.o 00:02:15.549 CC lib/env_dpdk/pci_vmd.o 00:02:15.549 CC lib/env_dpdk/pci_idxd.o 00:02:15.549 CC lib/env_dpdk/pci_event.o 00:02:15.549 CC lib/env_dpdk/sigbus_handler.o 00:02:15.549 CC lib/env_dpdk/pci_dpdk.o 00:02:15.549 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:15.549 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:15.810 LIB libspdk_conf.a 00:02:15.810 SO libspdk_conf.so.6.0 00:02:15.810 LIB libspdk_rdma_utils.a 00:02:15.810 LIB libspdk_json.a 00:02:15.810 SYMLINK libspdk_conf.so 00:02:15.810 SO libspdk_rdma_utils.so.1.0 00:02:15.810 SO libspdk_json.so.6.0 00:02:16.071 SYMLINK libspdk_rdma_utils.so 00:02:16.071 SYMLINK libspdk_json.so 00:02:16.071 LIB libspdk_idxd.a 00:02:16.071 LIB libspdk_vmd.a 00:02:16.071 SO libspdk_idxd.so.12.1 00:02:16.071 SO libspdk_vmd.so.6.0 00:02:16.071 SYMLINK libspdk_idxd.so 00:02:16.333 SYMLINK libspdk_vmd.so 00:02:16.333 CC lib/rdma_provider/common.o 00:02:16.333 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:16.333 CC lib/jsonrpc/jsonrpc_server.o 00:02:16.333 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:16.333 CC lib/jsonrpc/jsonrpc_client.o 00:02:16.333 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:02:16.595 LIB libspdk_rdma_provider.a 00:02:16.595 LIB libspdk_jsonrpc.a 00:02:16.595 SO libspdk_rdma_provider.so.7.0 00:02:16.595 SO libspdk_jsonrpc.so.6.0 00:02:16.595 SYMLINK libspdk_rdma_provider.so 00:02:16.595 SYMLINK libspdk_jsonrpc.so 00:02:16.857 LIB libspdk_env_dpdk.a 00:02:16.857 SO libspdk_env_dpdk.so.15.1 00:02:16.857 SYMLINK libspdk_env_dpdk.so 00:02:17.119 CC lib/rpc/rpc.o 00:02:17.380 LIB libspdk_rpc.a 00:02:17.380 SO libspdk_rpc.so.6.0 00:02:17.380 SYMLINK libspdk_rpc.so 00:02:17.641 CC lib/notify/notify.o 00:02:17.641 CC lib/notify/notify_rpc.o 00:02:17.641 CC lib/trace/trace.o 00:02:17.641 CC lib/trace/trace_flags.o 00:02:17.641 CC lib/trace/trace_rpc.o 00:02:17.641 CC lib/keyring/keyring.o 00:02:17.641 CC lib/keyring/keyring_rpc.o 00:02:17.901 LIB libspdk_notify.a 00:02:17.901 SO libspdk_notify.so.6.0 00:02:17.901 LIB libspdk_keyring.a 00:02:17.901 LIB libspdk_trace.a 00:02:17.901 SYMLINK libspdk_notify.so 00:02:18.161 SO libspdk_keyring.so.2.0 00:02:18.161 SO libspdk_trace.so.11.0 00:02:18.161 SYMLINK libspdk_keyring.so 00:02:18.161 SYMLINK libspdk_trace.so 00:02:18.423 CC lib/thread/thread.o 00:02:18.423 CC lib/sock/sock.o 00:02:18.423 CC lib/thread/iobuf.o 00:02:18.423 CC lib/sock/sock_rpc.o 00:02:18.994 LIB libspdk_sock.a 00:02:18.994 SO libspdk_sock.so.10.0 00:02:18.994 SYMLINK libspdk_sock.so 00:02:19.256 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:19.257 CC lib/nvme/nvme_ctrlr.o 00:02:19.257 CC lib/nvme/nvme_fabric.o 00:02:19.257 CC lib/nvme/nvme_ns_cmd.o 00:02:19.257 CC lib/nvme/nvme_ns.o 00:02:19.257 CC lib/nvme/nvme_pcie_common.o 00:02:19.257 CC lib/nvme/nvme_pcie.o 00:02:19.257 CC lib/nvme/nvme_qpair.o 00:02:19.257 CC lib/nvme/nvme.o 00:02:19.257 CC lib/nvme/nvme_quirks.o 00:02:19.257 CC lib/nvme/nvme_transport.o 00:02:19.257 CC lib/nvme/nvme_discovery.o 00:02:19.257 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:19.257 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:19.257 CC lib/nvme/nvme_tcp.o 00:02:19.257 CC lib/nvme/nvme_opal.o 00:02:19.257 CC lib/nvme/nvme_io_msg.o 00:02:19.257 CC lib/nvme/nvme_poll_group.o 00:02:19.257 CC lib/nvme/nvme_zns.o 00:02:19.257 CC lib/nvme/nvme_stubs.o 00:02:19.257 CC lib/nvme/nvme_auth.o 00:02:19.257 CC lib/nvme/nvme_cuse.o 00:02:19.257 CC lib/nvme/nvme_vfio_user.o 00:02:19.257 CC lib/nvme/nvme_rdma.o 00:02:19.830 LIB libspdk_thread.a 00:02:19.830 SO libspdk_thread.so.11.0 00:02:19.830 SYMLINK libspdk_thread.so 00:02:20.404 CC lib/virtio/virtio.o 00:02:20.404 CC lib/virtio/virtio_vhost_user.o 00:02:20.404 CC lib/virtio/virtio_vfio_user.o 00:02:20.404 CC lib/virtio/virtio_pci.o 00:02:20.404 CC lib/accel/accel.o 00:02:20.404 CC lib/accel/accel_sw.o 00:02:20.404 CC lib/accel/accel_rpc.o 00:02:20.404 CC lib/init/json_config.o 00:02:20.404 CC lib/blob/blobstore.o 00:02:20.404 CC lib/init/subsystem.o 00:02:20.404 CC lib/blob/request.o 00:02:20.404 CC lib/blob/zeroes.o 00:02:20.404 CC lib/init/subsystem_rpc.o 00:02:20.404 CC lib/fsdev/fsdev.o 00:02:20.404 CC lib/init/rpc.o 00:02:20.404 CC lib/blob/blob_bs_dev.o 00:02:20.404 CC lib/vfu_tgt/tgt_endpoint.o 00:02:20.404 CC lib/fsdev/fsdev_io.o 00:02:20.404 CC lib/vfu_tgt/tgt_rpc.o 00:02:20.404 CC lib/fsdev/fsdev_rpc.o 00:02:20.665 LIB libspdk_init.a 00:02:20.665 SO libspdk_init.so.6.0 00:02:20.665 LIB libspdk_virtio.a 00:02:20.665 LIB libspdk_vfu_tgt.a 00:02:20.665 SYMLINK libspdk_init.so 00:02:20.665 SO libspdk_virtio.so.7.0 00:02:20.665 SO libspdk_vfu_tgt.so.3.0 00:02:20.665 SYMLINK libspdk_virtio.so 00:02:20.665 SYMLINK libspdk_vfu_tgt.so 00:02:20.927 LIB libspdk_fsdev.a 
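[The CC/LIB/SO/SYMLINK lines in this stretch read as SPDK's quiet-make progress output: CC for each object, LIB for a static archive, SO for a versioned shared object, and SYMLINK for the unversioned development link. A minimal sketch of the versioned-library pattern behind a pair such as "SO libspdk_log.so.7.1" / "SYMLINK libspdk_log.so", assuming the object list from the CC lib/log/*.o lines above; SPDK's actual Makefile rules differ in detail:

# Hedged sketch of the soname/symlink pattern, not SPDK's real build rule.
# The version (7.1) and object names are taken from the log entries above.
cc -shared -fPIC -Wl,-soname,libspdk_log.so.7.1 \
  -o libspdk_log.so.7.1 log.o log_flags.o log_deprecated.o
ln -sf libspdk_log.so.7.1 libspdk_log.so]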
00:02:20.927 SO libspdk_fsdev.so.2.0 00:02:20.927 CC lib/event/app.o 00:02:20.927 CC lib/event/reactor.o 00:02:20.927 CC lib/event/log_rpc.o 00:02:20.927 CC lib/event/app_rpc.o 00:02:20.927 CC lib/event/scheduler_static.o 00:02:20.927 SYMLINK libspdk_fsdev.so 00:02:21.189 LIB libspdk_accel.a 00:02:21.452 SO libspdk_accel.so.16.0 00:02:21.452 LIB libspdk_nvme.a 00:02:21.452 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:21.452 SYMLINK libspdk_accel.so 00:02:21.452 LIB libspdk_event.a 00:02:21.452 SO libspdk_event.so.14.0 00:02:21.452 SO libspdk_nvme.so.15.0 00:02:21.713 SYMLINK libspdk_event.so 00:02:21.713 SYMLINK libspdk_nvme.so 00:02:21.713 CC lib/bdev/bdev.o 00:02:21.713 CC lib/bdev/bdev_rpc.o 00:02:21.713 CC lib/bdev/bdev_zone.o 00:02:21.713 CC lib/bdev/part.o 00:02:21.713 CC lib/bdev/scsi_nvme.o 00:02:21.975 LIB libspdk_fuse_dispatcher.a 00:02:21.975 SO libspdk_fuse_dispatcher.so.1.0 00:02:22.237 SYMLINK libspdk_fuse_dispatcher.so 00:02:23.181 LIB libspdk_blob.a 00:02:23.181 SO libspdk_blob.so.12.0 00:02:23.181 SYMLINK libspdk_blob.so 00:02:23.442 CC lib/blobfs/blobfs.o 00:02:23.442 CC lib/blobfs/tree.o 00:02:23.442 CC lib/lvol/lvol.o 00:02:24.383 LIB libspdk_bdev.a 00:02:24.383 SO libspdk_bdev.so.17.0 00:02:24.383 LIB libspdk_blobfs.a 00:02:24.383 SO libspdk_blobfs.so.11.0 00:02:24.383 SYMLINK libspdk_bdev.so 00:02:24.383 LIB libspdk_lvol.a 00:02:24.383 SYMLINK libspdk_blobfs.so 00:02:24.383 SO libspdk_lvol.so.11.0 00:02:24.383 SYMLINK libspdk_lvol.so 00:02:24.643 CC lib/ftl/ftl_core.o 00:02:24.643 CC lib/ftl/ftl_init.o 00:02:24.643 CC lib/ftl/ftl_layout.o 00:02:24.643 CC lib/ftl/ftl_debug.o 00:02:24.643 CC lib/nvmf/ctrlr.o 00:02:24.643 CC lib/ftl/ftl_io.o 00:02:24.643 CC lib/scsi/dev.o 00:02:24.643 CC lib/nvmf/ctrlr_discovery.o 00:02:24.643 CC lib/ftl/ftl_sb.o 00:02:24.643 CC lib/ftl/ftl_l2p.o 00:02:24.643 CC lib/nbd/nbd.o 00:02:24.643 CC lib/scsi/lun.o 00:02:24.643 CC lib/nvmf/ctrlr_bdev.o 00:02:24.643 CC lib/ftl/ftl_l2p_flat.o 00:02:24.643 CC lib/nbd/nbd_rpc.o 00:02:24.643 CC lib/scsi/port.o 00:02:24.643 CC lib/nvmf/subsystem.o 00:02:24.643 CC lib/ublk/ublk.o 00:02:24.643 CC lib/ftl/ftl_nv_cache.o 00:02:24.643 CC lib/scsi/scsi.o 00:02:24.643 CC lib/ftl/ftl_band.o 00:02:24.643 CC lib/nvmf/nvmf.o 00:02:24.643 CC lib/scsi/scsi_bdev.o 00:02:24.643 CC lib/ublk/ublk_rpc.o 00:02:24.643 CC lib/ftl/ftl_band_ops.o 00:02:24.643 CC lib/nvmf/nvmf_rpc.o 00:02:24.643 CC lib/scsi/scsi_pr.o 00:02:24.643 CC lib/ftl/ftl_writer.o 00:02:24.643 CC lib/nvmf/transport.o 00:02:24.643 CC lib/scsi/scsi_rpc.o 00:02:24.643 CC lib/nvmf/tcp.o 00:02:24.643 CC lib/ftl/ftl_rq.o 00:02:24.643 CC lib/scsi/task.o 00:02:24.643 CC lib/nvmf/stubs.o 00:02:24.643 CC lib/ftl/ftl_reloc.o 00:02:24.643 CC lib/nvmf/mdns_server.o 00:02:24.643 CC lib/ftl/ftl_l2p_cache.o 00:02:24.643 CC lib/nvmf/vfio_user.o 00:02:24.643 CC lib/ftl/ftl_p2l.o 00:02:24.643 CC lib/ftl/ftl_p2l_log.o 00:02:24.643 CC lib/nvmf/rdma.o 00:02:24.643 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:24.643 CC lib/nvmf/auth.o 00:02:24.643 CC lib/ftl/mngt/ftl_mngt.o 00:02:24.643 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:24.643 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:24.643 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:24.643 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:24.643 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:24.643 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:24.643 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:24.643 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:24.643 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:24.643 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:24.643 CC lib/ftl/utils/ftl_conf.o 00:02:24.643 
CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:24.643 CC lib/ftl/utils/ftl_md.o 00:02:24.643 CC lib/ftl/utils/ftl_mempool.o 00:02:24.643 CC lib/ftl/utils/ftl_bitmap.o 00:02:24.643 CC lib/ftl/utils/ftl_property.o 00:02:24.643 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:24.643 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:24.643 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:24.643 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:24.643 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:24.643 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:24.643 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:24.643 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:24.643 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:24.643 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:24.643 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:24.643 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:24.643 CC lib/ftl/base/ftl_base_dev.o 00:02:24.643 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:24.643 CC lib/ftl/base/ftl_base_bdev.o 00:02:24.643 CC lib/ftl/ftl_trace.o 00:02:25.585 LIB libspdk_nbd.a 00:02:25.585 SO libspdk_nbd.so.7.0 00:02:25.585 LIB libspdk_scsi.a 00:02:25.585 SYMLINK libspdk_nbd.so 00:02:25.585 SO libspdk_scsi.so.9.0 00:02:25.585 SYMLINK libspdk_scsi.so 00:02:25.585 LIB libspdk_ublk.a 00:02:25.585 SO libspdk_ublk.so.3.0 00:02:25.846 SYMLINK libspdk_ublk.so 00:02:25.846 LIB libspdk_ftl.a 00:02:25.846 CC lib/vhost/vhost.o 00:02:25.846 CC lib/iscsi/conn.o 00:02:25.846 CC lib/vhost/vhost_rpc.o 00:02:25.846 CC lib/iscsi/init_grp.o 00:02:25.846 CC lib/vhost/vhost_scsi.o 00:02:25.846 CC lib/iscsi/iscsi.o 00:02:25.846 CC lib/vhost/vhost_blk.o 00:02:25.846 CC lib/vhost/rte_vhost_user.o 00:02:25.846 CC lib/iscsi/param.o 00:02:25.846 CC lib/iscsi/portal_grp.o 00:02:25.846 CC lib/iscsi/tgt_node.o 00:02:25.846 CC lib/iscsi/iscsi_subsystem.o 00:02:25.846 CC lib/iscsi/iscsi_rpc.o 00:02:25.846 CC lib/iscsi/task.o 00:02:26.108 SO libspdk_ftl.so.9.0 00:02:26.369 SYMLINK libspdk_ftl.so 00:02:26.942 LIB libspdk_nvmf.a 00:02:26.942 LIB libspdk_vhost.a 00:02:26.942 SO libspdk_nvmf.so.20.0 00:02:26.942 SO libspdk_vhost.so.8.0 00:02:27.203 SYMLINK libspdk_vhost.so 00:02:27.203 SYMLINK libspdk_nvmf.so 00:02:27.203 LIB libspdk_iscsi.a 00:02:27.203 SO libspdk_iscsi.so.8.0 00:02:27.464 SYMLINK libspdk_iscsi.so 00:02:28.039 CC module/env_dpdk/env_dpdk_rpc.o 00:02:28.039 CC module/vfu_device/vfu_virtio.o 00:02:28.039 CC module/vfu_device/vfu_virtio_blk.o 00:02:28.039 CC module/vfu_device/vfu_virtio_scsi.o 00:02:28.039 CC module/vfu_device/vfu_virtio_rpc.o 00:02:28.039 CC module/vfu_device/vfu_virtio_fs.o 00:02:28.039 LIB libspdk_env_dpdk_rpc.a 00:02:28.299 CC module/sock/posix/posix.o 00:02:28.299 CC module/keyring/file/keyring.o 00:02:28.299 CC module/keyring/file/keyring_rpc.o 00:02:28.299 CC module/blob/bdev/blob_bdev.o 00:02:28.299 CC module/keyring/linux/keyring.o 00:02:28.299 CC module/keyring/linux/keyring_rpc.o 00:02:28.299 CC module/accel/ioat/accel_ioat.o 00:02:28.299 CC module/accel/ioat/accel_ioat_rpc.o 00:02:28.299 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:28.299 CC module/accel/iaa/accel_iaa.o 00:02:28.299 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:28.299 CC module/scheduler/gscheduler/gscheduler.o 00:02:28.299 CC module/accel/iaa/accel_iaa_rpc.o 00:02:28.299 CC module/fsdev/aio/fsdev_aio.o 00:02:28.299 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:28.299 CC module/fsdev/aio/linux_aio_mgr.o 00:02:28.299 CC module/accel/dsa/accel_dsa_rpc.o 00:02:28.299 CC module/accel/dsa/accel_dsa.o 00:02:28.299 CC module/accel/error/accel_error.o 00:02:28.299 CC 
module/accel/error/accel_error_rpc.o 00:02:28.299 SO libspdk_env_dpdk_rpc.so.6.0 00:02:28.299 SYMLINK libspdk_env_dpdk_rpc.so 00:02:28.300 LIB libspdk_keyring_linux.a 00:02:28.300 LIB libspdk_keyring_file.a 00:02:28.300 LIB libspdk_scheduler_gscheduler.a 00:02:28.300 SO libspdk_keyring_linux.so.1.0 00:02:28.300 LIB libspdk_scheduler_dpdk_governor.a 00:02:28.300 LIB libspdk_scheduler_dynamic.a 00:02:28.300 SO libspdk_keyring_file.so.2.0 00:02:28.300 LIB libspdk_accel_ioat.a 00:02:28.300 SO libspdk_scheduler_gscheduler.so.4.0 00:02:28.560 LIB libspdk_accel_iaa.a 00:02:28.560 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:28.560 LIB libspdk_accel_error.a 00:02:28.560 SO libspdk_scheduler_dynamic.so.4.0 00:02:28.560 SYMLINK libspdk_keyring_linux.so 00:02:28.560 SO libspdk_accel_ioat.so.6.0 00:02:28.560 SYMLINK libspdk_scheduler_gscheduler.so 00:02:28.560 LIB libspdk_blob_bdev.a 00:02:28.560 SO libspdk_accel_iaa.so.3.0 00:02:28.560 SYMLINK libspdk_keyring_file.so 00:02:28.560 SO libspdk_accel_error.so.2.0 00:02:28.560 LIB libspdk_accel_dsa.a 00:02:28.560 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:28.560 SYMLINK libspdk_scheduler_dynamic.so 00:02:28.560 SO libspdk_blob_bdev.so.12.0 00:02:28.560 SYMLINK libspdk_accel_ioat.so 00:02:28.560 SO libspdk_accel_dsa.so.5.0 00:02:28.560 SYMLINK libspdk_accel_iaa.so 00:02:28.560 SYMLINK libspdk_accel_error.so 00:02:28.560 LIB libspdk_vfu_device.a 00:02:28.560 SYMLINK libspdk_blob_bdev.so 00:02:28.560 SYMLINK libspdk_accel_dsa.so 00:02:28.560 SO libspdk_vfu_device.so.3.0 00:02:28.822 SYMLINK libspdk_vfu_device.so 00:02:28.822 LIB libspdk_fsdev_aio.a 00:02:28.822 LIB libspdk_sock_posix.a 00:02:28.822 SO libspdk_fsdev_aio.so.1.0 00:02:28.822 SO libspdk_sock_posix.so.6.0 00:02:29.084 SYMLINK libspdk_fsdev_aio.so 00:02:29.084 SYMLINK libspdk_sock_posix.so 00:02:29.084 CC module/bdev/error/vbdev_error.o 00:02:29.084 CC module/bdev/lvol/vbdev_lvol.o 00:02:29.084 CC module/bdev/error/vbdev_error_rpc.o 00:02:29.084 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:29.084 CC module/blobfs/bdev/blobfs_bdev.o 00:02:29.084 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:29.084 CC module/bdev/aio/bdev_aio.o 00:02:29.084 CC module/bdev/aio/bdev_aio_rpc.o 00:02:29.084 CC module/bdev/gpt/gpt.o 00:02:29.084 CC module/bdev/delay/vbdev_delay.o 00:02:29.084 CC module/bdev/gpt/vbdev_gpt.o 00:02:29.084 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:29.084 CC module/bdev/ftl/bdev_ftl.o 00:02:29.084 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:29.084 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:29.084 CC module/bdev/null/bdev_null.o 00:02:29.084 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:29.084 CC module/bdev/null/bdev_null_rpc.o 00:02:29.084 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:29.084 CC module/bdev/nvme/bdev_nvme.o 00:02:29.084 CC module/bdev/raid/bdev_raid.o 00:02:29.084 CC module/bdev/split/vbdev_split.o 00:02:29.084 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:29.084 CC module/bdev/raid/bdev_raid_rpc.o 00:02:29.084 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:29.084 CC module/bdev/split/vbdev_split_rpc.o 00:02:29.084 CC module/bdev/malloc/bdev_malloc.o 00:02:29.084 CC module/bdev/raid/bdev_raid_sb.o 00:02:29.084 CC module/bdev/nvme/nvme_rpc.o 00:02:29.084 CC module/bdev/passthru/vbdev_passthru.o 00:02:29.084 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:29.084 CC module/bdev/iscsi/bdev_iscsi.o 00:02:29.084 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:29.084 CC module/bdev/nvme/bdev_mdns_client.o 00:02:29.084 CC module/bdev/raid/raid0.o 00:02:29.084 CC 
module/bdev/passthru/vbdev_passthru_rpc.o 00:02:29.084 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:29.084 CC module/bdev/raid/raid1.o 00:02:29.084 CC module/bdev/nvme/vbdev_opal.o 00:02:29.084 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:29.084 CC module/bdev/raid/concat.o 00:02:29.084 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:29.347 LIB libspdk_blobfs_bdev.a 00:02:29.608 SO libspdk_blobfs_bdev.so.6.0 00:02:29.608 LIB libspdk_bdev_split.a 00:02:29.608 LIB libspdk_bdev_gpt.a 00:02:29.608 LIB libspdk_bdev_error.a 00:02:29.608 LIB libspdk_bdev_null.a 00:02:29.608 SO libspdk_bdev_split.so.6.0 00:02:29.608 SO libspdk_bdev_gpt.so.6.0 00:02:29.608 SYMLINK libspdk_blobfs_bdev.so 00:02:29.608 SO libspdk_bdev_error.so.6.0 00:02:29.608 LIB libspdk_bdev_ftl.a 00:02:29.608 SO libspdk_bdev_null.so.6.0 00:02:29.608 LIB libspdk_bdev_aio.a 00:02:29.608 LIB libspdk_bdev_passthru.a 00:02:29.608 SYMLINK libspdk_bdev_split.so 00:02:29.608 SYMLINK libspdk_bdev_gpt.so 00:02:29.608 SO libspdk_bdev_ftl.so.6.0 00:02:29.608 SO libspdk_bdev_aio.so.6.0 00:02:29.608 LIB libspdk_bdev_zone_block.a 00:02:29.608 SO libspdk_bdev_passthru.so.6.0 00:02:29.608 LIB libspdk_bdev_iscsi.a 00:02:29.608 LIB libspdk_bdev_delay.a 00:02:29.608 SYMLINK libspdk_bdev_error.so 00:02:29.608 SYMLINK libspdk_bdev_null.so 00:02:29.608 LIB libspdk_bdev_malloc.a 00:02:29.609 SO libspdk_bdev_iscsi.so.6.0 00:02:29.609 SO libspdk_bdev_zone_block.so.6.0 00:02:29.609 SO libspdk_bdev_delay.so.6.0 00:02:29.609 SYMLINK libspdk_bdev_aio.so 00:02:29.609 SYMLINK libspdk_bdev_ftl.so 00:02:29.609 SO libspdk_bdev_malloc.so.6.0 00:02:29.609 SYMLINK libspdk_bdev_passthru.so 00:02:29.870 LIB libspdk_bdev_lvol.a 00:02:29.870 SYMLINK libspdk_bdev_zone_block.so 00:02:29.870 SYMLINK libspdk_bdev_iscsi.so 00:02:29.870 SYMLINK libspdk_bdev_delay.so 00:02:29.870 SO libspdk_bdev_lvol.so.6.0 00:02:29.870 LIB libspdk_bdev_virtio.a 00:02:29.870 SYMLINK libspdk_bdev_malloc.so 00:02:29.870 SO libspdk_bdev_virtio.so.6.0 00:02:29.870 SYMLINK libspdk_bdev_lvol.so 00:02:29.870 SYMLINK libspdk_bdev_virtio.so 00:02:30.131 LIB libspdk_bdev_raid.a 00:02:30.393 SO libspdk_bdev_raid.so.6.0 00:02:30.393 SYMLINK libspdk_bdev_raid.so 00:02:31.807 LIB libspdk_bdev_nvme.a 00:02:31.807 SO libspdk_bdev_nvme.so.7.1 00:02:31.807 SYMLINK libspdk_bdev_nvme.so 00:02:32.457 CC module/event/subsystems/iobuf/iobuf.o 00:02:32.457 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:32.457 CC module/event/subsystems/vmd/vmd.o 00:02:32.457 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:32.457 CC module/event/subsystems/keyring/keyring.o 00:02:32.457 CC module/event/subsystems/sock/sock.o 00:02:32.457 CC module/event/subsystems/scheduler/scheduler.o 00:02:32.457 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:32.457 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:32.457 CC module/event/subsystems/fsdev/fsdev.o 00:02:32.731 LIB libspdk_event_vhost_blk.a 00:02:32.731 LIB libspdk_event_sock.a 00:02:32.731 LIB libspdk_event_keyring.a 00:02:32.731 LIB libspdk_event_vmd.a 00:02:32.731 LIB libspdk_event_iobuf.a 00:02:32.731 LIB libspdk_event_scheduler.a 00:02:32.731 LIB libspdk_event_fsdev.a 00:02:32.731 LIB libspdk_event_vfu_tgt.a 00:02:32.731 SO libspdk_event_keyring.so.1.0 00:02:32.731 SO libspdk_event_sock.so.5.0 00:02:32.731 SO libspdk_event_vhost_blk.so.3.0 00:02:32.731 SO libspdk_event_iobuf.so.3.0 00:02:32.731 SO libspdk_event_vmd.so.6.0 00:02:32.731 SO libspdk_event_scheduler.so.4.0 00:02:32.731 SO libspdk_event_fsdev.so.1.0 00:02:32.731 SO libspdk_event_vfu_tgt.so.3.0 00:02:32.731 SYMLINK 
libspdk_event_keyring.so 00:02:32.731 SYMLINK libspdk_event_sock.so 00:02:32.731 SYMLINK libspdk_event_vhost_blk.so 00:02:32.731 SYMLINK libspdk_event_iobuf.so 00:02:32.731 SYMLINK libspdk_event_scheduler.so 00:02:32.731 SYMLINK libspdk_event_vmd.so 00:02:32.731 SYMLINK libspdk_event_fsdev.so 00:02:32.731 SYMLINK libspdk_event_vfu_tgt.so 00:02:32.992 CC module/event/subsystems/accel/accel.o 00:02:33.253 LIB libspdk_event_accel.a 00:02:33.253 SO libspdk_event_accel.so.6.0 00:02:33.514 SYMLINK libspdk_event_accel.so 00:02:33.776 CC module/event/subsystems/bdev/bdev.o 00:02:34.037 LIB libspdk_event_bdev.a 00:02:34.037 SO libspdk_event_bdev.so.6.0 00:02:34.037 SYMLINK libspdk_event_bdev.so 00:02:34.299 CC module/event/subsystems/nbd/nbd.o 00:02:34.299 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:34.299 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:34.299 CC module/event/subsystems/ublk/ublk.o 00:02:34.299 CC module/event/subsystems/scsi/scsi.o 00:02:34.560 LIB libspdk_event_nbd.a 00:02:34.560 LIB libspdk_event_ublk.a 00:02:34.560 LIB libspdk_event_scsi.a 00:02:34.560 SO libspdk_event_nbd.so.6.0 00:02:34.560 SO libspdk_event_ublk.so.3.0 00:02:34.560 SO libspdk_event_scsi.so.6.0 00:02:34.560 LIB libspdk_event_nvmf.a 00:02:34.560 SYMLINK libspdk_event_nbd.so 00:02:34.821 SYMLINK libspdk_event_ublk.so 00:02:34.821 SO libspdk_event_nvmf.so.6.0 00:02:34.821 SYMLINK libspdk_event_scsi.so 00:02:34.821 SYMLINK libspdk_event_nvmf.so 00:02:35.082 CC module/event/subsystems/iscsi/iscsi.o 00:02:35.082 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:35.343 LIB libspdk_event_vhost_scsi.a 00:02:35.343 LIB libspdk_event_iscsi.a 00:02:35.343 SO libspdk_event_vhost_scsi.so.3.0 00:02:35.343 SO libspdk_event_iscsi.so.6.0 00:02:35.343 SYMLINK libspdk_event_vhost_scsi.so 00:02:35.343 SYMLINK libspdk_event_iscsi.so 00:02:35.605 SO libspdk.so.6.0 00:02:35.605 SYMLINK libspdk.so 00:02:35.866 TEST_HEADER include/spdk/accel.h 00:02:35.866 TEST_HEADER include/spdk/accel_module.h 00:02:35.866 TEST_HEADER include/spdk/assert.h 00:02:35.866 TEST_HEADER include/spdk/base64.h 00:02:35.866 TEST_HEADER include/spdk/barrier.h 00:02:35.866 CXX app/trace/trace.o 00:02:35.866 TEST_HEADER include/spdk/bdev_module.h 00:02:35.866 TEST_HEADER include/spdk/bdev.h 00:02:35.866 TEST_HEADER include/spdk/bdev_zone.h 00:02:35.866 TEST_HEADER include/spdk/bit_array.h 00:02:35.866 CC app/spdk_top/spdk_top.o 00:02:35.866 TEST_HEADER include/spdk/bit_pool.h 00:02:35.866 CC app/trace_record/trace_record.o 00:02:35.866 TEST_HEADER include/spdk/blob_bdev.h 00:02:35.866 CC app/spdk_lspci/spdk_lspci.o 00:02:35.866 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:35.866 CC app/spdk_nvme_perf/perf.o 00:02:35.866 TEST_HEADER include/spdk/blobfs.h 00:02:35.866 TEST_HEADER include/spdk/blob.h 00:02:35.866 TEST_HEADER include/spdk/conf.h 00:02:35.866 TEST_HEADER include/spdk/config.h 00:02:35.866 CC test/rpc_client/rpc_client_test.o 00:02:35.866 CC app/spdk_nvme_discover/discovery_aer.o 00:02:35.866 CC app/spdk_nvme_identify/identify.o 00:02:35.866 TEST_HEADER include/spdk/cpuset.h 00:02:35.866 TEST_HEADER include/spdk/crc16.h 00:02:35.866 TEST_HEADER include/spdk/crc32.h 00:02:35.866 TEST_HEADER include/spdk/crc64.h 00:02:35.866 TEST_HEADER include/spdk/dif.h 00:02:35.866 TEST_HEADER include/spdk/dma.h 00:02:36.130 TEST_HEADER include/spdk/endian.h 00:02:36.130 TEST_HEADER include/spdk/env_dpdk.h 00:02:36.130 TEST_HEADER include/spdk/env.h 00:02:36.130 TEST_HEADER include/spdk/event.h 00:02:36.130 TEST_HEADER include/spdk/file.h 00:02:36.130 
TEST_HEADER include/spdk/fd.h 00:02:36.130 TEST_HEADER include/spdk/fd_group.h 00:02:36.130 TEST_HEADER include/spdk/fsdev.h 00:02:36.130 TEST_HEADER include/spdk/fsdev_module.h 00:02:36.130 TEST_HEADER include/spdk/ftl.h 00:02:36.130 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:36.130 TEST_HEADER include/spdk/gpt_spec.h 00:02:36.130 TEST_HEADER include/spdk/hexlify.h 00:02:36.130 TEST_HEADER include/spdk/idxd.h 00:02:36.130 TEST_HEADER include/spdk/histogram_data.h 00:02:36.130 TEST_HEADER include/spdk/idxd_spec.h 00:02:36.130 TEST_HEADER include/spdk/init.h 00:02:36.130 TEST_HEADER include/spdk/ioat.h 00:02:36.130 TEST_HEADER include/spdk/ioat_spec.h 00:02:36.131 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:36.131 TEST_HEADER include/spdk/iscsi_spec.h 00:02:36.131 TEST_HEADER include/spdk/json.h 00:02:36.131 TEST_HEADER include/spdk/jsonrpc.h 00:02:36.131 TEST_HEADER include/spdk/keyring.h 00:02:36.131 TEST_HEADER include/spdk/likely.h 00:02:36.131 TEST_HEADER include/spdk/keyring_module.h 00:02:36.131 TEST_HEADER include/spdk/log.h 00:02:36.131 TEST_HEADER include/spdk/lvol.h 00:02:36.131 TEST_HEADER include/spdk/md5.h 00:02:36.131 TEST_HEADER include/spdk/memory.h 00:02:36.131 TEST_HEADER include/spdk/mmio.h 00:02:36.131 TEST_HEADER include/spdk/nbd.h 00:02:36.131 TEST_HEADER include/spdk/net.h 00:02:36.131 CC app/iscsi_tgt/iscsi_tgt.o 00:02:36.131 TEST_HEADER include/spdk/notify.h 00:02:36.131 CC app/nvmf_tgt/nvmf_main.o 00:02:36.131 TEST_HEADER include/spdk/nvme.h 00:02:36.131 TEST_HEADER include/spdk/nvme_intel.h 00:02:36.131 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:36.131 CC app/spdk_dd/spdk_dd.o 00:02:36.131 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:36.131 TEST_HEADER include/spdk/nvme_zns.h 00:02:36.131 TEST_HEADER include/spdk/nvme_spec.h 00:02:36.131 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:36.131 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:36.131 TEST_HEADER include/spdk/nvmf.h 00:02:36.131 TEST_HEADER include/spdk/nvmf_spec.h 00:02:36.131 TEST_HEADER include/spdk/nvmf_transport.h 00:02:36.131 TEST_HEADER include/spdk/opal_spec.h 00:02:36.131 TEST_HEADER include/spdk/opal.h 00:02:36.131 TEST_HEADER include/spdk/queue.h 00:02:36.131 TEST_HEADER include/spdk/pci_ids.h 00:02:36.131 TEST_HEADER include/spdk/pipe.h 00:02:36.131 CC app/spdk_tgt/spdk_tgt.o 00:02:36.131 TEST_HEADER include/spdk/reduce.h 00:02:36.131 TEST_HEADER include/spdk/scheduler.h 00:02:36.131 TEST_HEADER include/spdk/rpc.h 00:02:36.131 TEST_HEADER include/spdk/scsi_spec.h 00:02:36.131 TEST_HEADER include/spdk/scsi.h 00:02:36.131 TEST_HEADER include/spdk/sock.h 00:02:36.131 TEST_HEADER include/spdk/stdinc.h 00:02:36.131 TEST_HEADER include/spdk/string.h 00:02:36.131 TEST_HEADER include/spdk/thread.h 00:02:36.131 TEST_HEADER include/spdk/trace_parser.h 00:02:36.131 TEST_HEADER include/spdk/trace.h 00:02:36.131 TEST_HEADER include/spdk/tree.h 00:02:36.131 TEST_HEADER include/spdk/ublk.h 00:02:36.131 TEST_HEADER include/spdk/uuid.h 00:02:36.131 TEST_HEADER include/spdk/util.h 00:02:36.131 TEST_HEADER include/spdk/version.h 00:02:36.131 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:36.131 TEST_HEADER include/spdk/vhost.h 00:02:36.131 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:36.131 TEST_HEADER include/spdk/vmd.h 00:02:36.131 TEST_HEADER include/spdk/xor.h 00:02:36.131 TEST_HEADER include/spdk/zipf.h 00:02:36.131 CXX test/cpp_headers/accel.o 00:02:36.131 CXX test/cpp_headers/accel_module.o 00:02:36.131 CXX test/cpp_headers/assert.o 00:02:36.131 CXX test/cpp_headers/barrier.o 00:02:36.131 CXX 
test/cpp_headers/base64.o 00:02:36.131 CXX test/cpp_headers/bdev.o 00:02:36.131 CXX test/cpp_headers/bdev_module.o 00:02:36.131 CXX test/cpp_headers/bdev_zone.o 00:02:36.131 CXX test/cpp_headers/bit_array.o 00:02:36.131 CXX test/cpp_headers/bit_pool.o 00:02:36.131 CXX test/cpp_headers/blob_bdev.o 00:02:36.131 CXX test/cpp_headers/blobfs_bdev.o 00:02:36.131 CXX test/cpp_headers/blobfs.o 00:02:36.131 CXX test/cpp_headers/blob.o 00:02:36.131 CXX test/cpp_headers/conf.o 00:02:36.131 CXX test/cpp_headers/config.o 00:02:36.131 CXX test/cpp_headers/crc16.o 00:02:36.131 CXX test/cpp_headers/cpuset.o 00:02:36.131 CXX test/cpp_headers/crc32.o 00:02:36.131 CXX test/cpp_headers/dif.o 00:02:36.131 CXX test/cpp_headers/crc64.o 00:02:36.131 CXX test/cpp_headers/dma.o 00:02:36.131 CXX test/cpp_headers/endian.o 00:02:36.131 CXX test/cpp_headers/env_dpdk.o 00:02:36.131 CXX test/cpp_headers/env.o 00:02:36.131 CXX test/cpp_headers/fd.o 00:02:36.131 CXX test/cpp_headers/event.o 00:02:36.131 CXX test/cpp_headers/fd_group.o 00:02:36.131 CXX test/cpp_headers/fsdev.o 00:02:36.131 CXX test/cpp_headers/file.o 00:02:36.131 CXX test/cpp_headers/ftl.o 00:02:36.131 CXX test/cpp_headers/fsdev_module.o 00:02:36.131 CXX test/cpp_headers/fuse_dispatcher.o 00:02:36.131 CXX test/cpp_headers/gpt_spec.o 00:02:36.131 CXX test/cpp_headers/hexlify.o 00:02:36.131 CXX test/cpp_headers/idxd.o 00:02:36.131 CXX test/cpp_headers/histogram_data.o 00:02:36.131 CXX test/cpp_headers/idxd_spec.o 00:02:36.131 CXX test/cpp_headers/init.o 00:02:36.131 CXX test/cpp_headers/ioat.o 00:02:36.131 CXX test/cpp_headers/ioat_spec.o 00:02:36.131 CXX test/cpp_headers/json.o 00:02:36.131 CXX test/cpp_headers/jsonrpc.o 00:02:36.131 CXX test/cpp_headers/iscsi_spec.o 00:02:36.131 CXX test/cpp_headers/likely.o 00:02:36.131 CXX test/cpp_headers/keyring.o 00:02:36.131 CXX test/cpp_headers/keyring_module.o 00:02:36.131 CXX test/cpp_headers/log.o 00:02:36.131 CXX test/cpp_headers/lvol.o 00:02:36.131 CXX test/cpp_headers/memory.o 00:02:36.131 CXX test/cpp_headers/mmio.o 00:02:36.131 CXX test/cpp_headers/md5.o 00:02:36.131 CXX test/cpp_headers/nbd.o 00:02:36.131 CXX test/cpp_headers/net.o 00:02:36.131 CXX test/cpp_headers/nvme.o 00:02:36.131 CXX test/cpp_headers/nvme_intel.o 00:02:36.131 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:36.131 CXX test/cpp_headers/notify.o 00:02:36.131 CXX test/cpp_headers/nvme_spec.o 00:02:36.131 CXX test/cpp_headers/nvme_ocssd.o 00:02:36.131 CXX test/cpp_headers/nvme_zns.o 00:02:36.131 CXX test/cpp_headers/nvmf_cmd.o 00:02:36.131 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:36.131 CXX test/cpp_headers/opal.o 00:02:36.131 CXX test/cpp_headers/nvmf.o 00:02:36.131 CXX test/cpp_headers/opal_spec.o 00:02:36.131 CXX test/cpp_headers/nvmf_spec.o 00:02:36.131 CC examples/util/zipf/zipf.o 00:02:36.131 CXX test/cpp_headers/nvmf_transport.o 00:02:36.131 CC test/app/jsoncat/jsoncat.o 00:02:36.131 CXX test/cpp_headers/pipe.o 00:02:36.131 CXX test/cpp_headers/reduce.o 00:02:36.131 CXX test/cpp_headers/pci_ids.o 00:02:36.131 CXX test/cpp_headers/queue.o 00:02:36.131 CXX test/cpp_headers/rpc.o 00:02:36.131 CXX test/cpp_headers/scsi.o 00:02:36.131 CXX test/cpp_headers/scheduler.o 00:02:36.131 CXX test/cpp_headers/sock.o 00:02:36.131 CC examples/ioat/perf/perf.o 00:02:36.131 CXX test/cpp_headers/scsi_spec.o 00:02:36.131 CC examples/ioat/verify/verify.o 00:02:36.131 CXX test/cpp_headers/stdinc.o 00:02:36.131 CXX test/cpp_headers/thread.o 00:02:36.131 CXX test/cpp_headers/string.o 00:02:36.131 CXX test/cpp_headers/trace_parser.o 00:02:36.131 CXX 
test/cpp_headers/trace.o 00:02:36.399 CC test/app/histogram_perf/histogram_perf.o 00:02:36.399 CXX test/cpp_headers/tree.o 00:02:36.399 LINK spdk_lspci 00:02:36.399 CXX test/cpp_headers/ublk.o 00:02:36.399 CXX test/cpp_headers/uuid.o 00:02:36.399 CXX test/cpp_headers/version.o 00:02:36.399 CXX test/cpp_headers/util.o 00:02:36.399 CC test/thread/poller_perf/poller_perf.o 00:02:36.399 CXX test/cpp_headers/vfio_user_pci.o 00:02:36.399 CC test/app/stub/stub.o 00:02:36.399 CXX test/cpp_headers/vfio_user_spec.o 00:02:36.399 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:36.399 CXX test/cpp_headers/vhost.o 00:02:36.399 CXX test/cpp_headers/xor.o 00:02:36.399 CXX test/cpp_headers/vmd.o 00:02:36.399 CXX test/cpp_headers/zipf.o 00:02:36.399 CC test/env/vtophys/vtophys.o 00:02:36.399 CC test/env/memory/memory_ut.o 00:02:36.399 CC app/fio/nvme/fio_plugin.o 00:02:36.399 CC test/dma/test_dma/test_dma.o 00:02:36.399 CC test/app/bdev_svc/bdev_svc.o 00:02:36.399 CC app/fio/bdev/fio_plugin.o 00:02:36.399 CC test/env/pci/pci_ut.o 00:02:36.399 LINK rpc_client_test 00:02:36.399 LINK spdk_nvme_discover 00:02:36.669 LINK interrupt_tgt 00:02:36.669 LINK spdk_trace_record 00:02:36.669 LINK nvmf_tgt 00:02:36.932 LINK spdk_tgt 00:02:36.932 LINK zipf 00:02:36.932 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:36.932 LINK iscsi_tgt 00:02:36.932 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:36.932 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:36.932 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:36.932 CC test/env/mem_callbacks/mem_callbacks.o 00:02:36.932 LINK jsoncat 00:02:36.932 LINK bdev_svc 00:02:36.932 LINK vtophys 00:02:37.194 LINK spdk_dd 00:02:37.194 LINK spdk_trace 00:02:37.194 LINK histogram_perf 00:02:37.194 LINK poller_perf 00:02:37.454 LINK env_dpdk_post_init 00:02:37.454 LINK verify 00:02:37.454 LINK ioat_perf 00:02:37.454 LINK stub 00:02:37.454 LINK spdk_top 00:02:37.717 CC app/vhost/vhost.o 00:02:37.717 CC examples/idxd/perf/perf.o 00:02:37.717 LINK nvme_fuzz 00:02:37.717 LINK test_dma 00:02:37.717 CC examples/sock/hello_world/hello_sock.o 00:02:37.717 CC examples/vmd/lsvmd/lsvmd.o 00:02:37.717 CC examples/vmd/led/led.o 00:02:37.717 CC examples/thread/thread/thread_ex.o 00:02:37.717 LINK vhost_fuzz 00:02:37.717 LINK spdk_bdev 00:02:37.717 LINK spdk_nvme_perf 00:02:37.717 LINK pci_ut 00:02:37.717 LINK spdk_nvme 00:02:37.717 LINK vhost 00:02:37.717 LINK lsvmd 00:02:37.978 LINK mem_callbacks 00:02:37.978 CC test/event/event_perf/event_perf.o 00:02:37.978 LINK led 00:02:37.978 CC test/event/reactor/reactor.o 00:02:37.978 CC test/event/reactor_perf/reactor_perf.o 00:02:37.978 CC test/event/app_repeat/app_repeat.o 00:02:37.978 LINK hello_sock 00:02:37.978 LINK spdk_nvme_identify 00:02:37.978 CC test/event/scheduler/scheduler.o 00:02:37.978 LINK idxd_perf 00:02:37.978 LINK thread 00:02:37.978 LINK event_perf 00:02:37.978 LINK reactor_perf 00:02:37.978 LINK reactor 00:02:37.978 LINK app_repeat 00:02:38.239 LINK scheduler 00:02:38.239 CC test/nvme/aer/aer.o 00:02:38.239 CC test/nvme/e2edp/nvme_dp.o 00:02:38.239 CC test/nvme/reset/reset.o 00:02:38.239 CC test/nvme/connect_stress/connect_stress.o 00:02:38.239 CC test/nvme/compliance/nvme_compliance.o 00:02:38.239 CC test/blobfs/mkfs/mkfs.o 00:02:38.239 CC test/nvme/err_injection/err_injection.o 00:02:38.239 CC test/nvme/reserve/reserve.o 00:02:38.239 CC test/nvme/boot_partition/boot_partition.o 00:02:38.239 CC test/nvme/sgl/sgl.o 00:02:38.239 CC test/nvme/cuse/cuse.o 00:02:38.239 CC test/nvme/overhead/overhead.o 00:02:38.239 CC 
test/nvme/startup/startup.o 00:02:38.239 CC test/nvme/fused_ordering/fused_ordering.o 00:02:38.239 CC test/nvme/fdp/fdp.o 00:02:38.239 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:38.239 CC test/nvme/simple_copy/simple_copy.o 00:02:38.239 CC test/accel/dif/dif.o 00:02:38.239 LINK memory_ut 00:02:38.513 CC test/lvol/esnap/esnap.o 00:02:38.513 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:38.513 CC examples/nvme/hotplug/hotplug.o 00:02:38.513 CC examples/nvme/arbitration/arbitration.o 00:02:38.513 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:38.513 CC examples/nvme/hello_world/hello_world.o 00:02:38.513 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:38.513 CC examples/nvme/reconnect/reconnect.o 00:02:38.513 CC examples/nvme/abort/abort.o 00:02:38.513 LINK connect_stress 00:02:38.513 LINK startup 00:02:38.513 LINK boot_partition 00:02:38.513 LINK err_injection 00:02:38.513 LINK doorbell_aers 00:02:38.513 LINK reserve 00:02:38.513 LINK mkfs 00:02:38.513 LINK fused_ordering 00:02:38.513 LINK simple_copy 00:02:38.513 LINK reset 00:02:38.513 LINK nvme_dp 00:02:38.513 LINK sgl 00:02:38.513 LINK overhead 00:02:38.513 LINK aer 00:02:38.513 CC examples/accel/perf/accel_perf.o 00:02:38.513 LINK iscsi_fuzz 00:02:38.778 LINK nvme_compliance 00:02:38.778 CC examples/blob/cli/blobcli.o 00:02:38.778 LINK fdp 00:02:38.778 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:38.778 CC examples/blob/hello_world/hello_blob.o 00:02:38.778 LINK cmb_copy 00:02:38.778 LINK pmr_persistence 00:02:38.778 LINK hello_world 00:02:38.778 LINK hotplug 00:02:38.778 LINK reconnect 00:02:38.778 LINK arbitration 00:02:38.778 LINK abort 00:02:39.040 LINK hello_blob 00:02:39.040 LINK dif 00:02:39.040 LINK nvme_manage 00:02:39.040 LINK hello_fsdev 00:02:39.040 LINK accel_perf 00:02:39.302 LINK blobcli 00:02:39.564 LINK cuse 00:02:39.564 CC test/bdev/bdevio/bdevio.o 00:02:39.564 CC examples/bdev/hello_world/hello_bdev.o 00:02:39.825 CC examples/bdev/bdevperf/bdevperf.o 00:02:39.825 LINK hello_bdev 00:02:40.087 LINK bdevio 00:02:40.349 LINK bdevperf 00:02:41.295 CC examples/nvmf/nvmf/nvmf.o 00:02:41.295 LINK nvmf 00:02:43.214 LINK esnap 00:02:43.214 00:02:43.214 real 0m56.100s 00:02:43.214 user 8m8.046s 00:02:43.214 sys 5m36.459s 00:02:43.214 19:40:43 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:43.214 19:40:43 make -- common/autotest_common.sh@10 -- $ set +x 00:02:43.214 ************************************ 00:02:43.214 END TEST make 00:02:43.214 ************************************ 00:02:43.214 19:40:43 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:43.214 19:40:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:43.214 19:40:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:43.214 19:40:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.214 19:40:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:43.214 19:40:43 -- pm/common@44 -- $ pid=3295905 00:02:43.214 19:40:43 -- pm/common@50 -- $ kill -TERM 3295905 00:02:43.214 19:40:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.214 19:40:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:43.214 19:40:43 -- pm/common@44 -- $ pid=3295906 00:02:43.214 19:40:43 -- pm/common@50 -- $ kill -TERM 3295906 00:02:43.214 19:40:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.214 19:40:43 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:43.214 19:40:43 -- pm/common@44 -- $ pid=3295908 00:02:43.214 19:40:43 -- pm/common@50 -- $ kill -TERM 3295908 00:02:43.215 19:40:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.215 19:40:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:43.215 19:40:43 -- pm/common@44 -- $ pid=3295931 00:02:43.215 19:40:43 -- pm/common@50 -- $ sudo -E kill -TERM 3295931 00:02:43.215 19:40:44 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:43.215 19:40:44 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:43.477 19:40:44 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:43.477 19:40:44 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:43.477 19:40:44 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:43.477 19:40:44 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:43.477 19:40:44 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:43.477 19:40:44 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:43.477 19:40:44 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:43.477 19:40:44 -- scripts/common.sh@336 -- # IFS=.-: 00:02:43.477 19:40:44 -- scripts/common.sh@336 -- # read -ra ver1 00:02:43.477 19:40:44 -- scripts/common.sh@337 -- # IFS=.-: 00:02:43.477 19:40:44 -- scripts/common.sh@337 -- # read -ra ver2 00:02:43.477 19:40:44 -- scripts/common.sh@338 -- # local 'op=<' 00:02:43.477 19:40:44 -- scripts/common.sh@340 -- # ver1_l=2 00:02:43.477 19:40:44 -- scripts/common.sh@341 -- # ver2_l=1 00:02:43.477 19:40:44 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:43.477 19:40:44 -- scripts/common.sh@344 -- # case "$op" in 00:02:43.477 19:40:44 -- scripts/common.sh@345 -- # : 1 00:02:43.477 19:40:44 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:43.477 19:40:44 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:43.477 19:40:44 -- scripts/common.sh@365 -- # decimal 1 00:02:43.477 19:40:44 -- scripts/common.sh@353 -- # local d=1 00:02:43.477 19:40:44 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:43.477 19:40:44 -- scripts/common.sh@355 -- # echo 1 00:02:43.477 19:40:44 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:43.477 19:40:44 -- scripts/common.sh@366 -- # decimal 2 00:02:43.477 19:40:44 -- scripts/common.sh@353 -- # local d=2 00:02:43.477 19:40:44 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:43.477 19:40:44 -- scripts/common.sh@355 -- # echo 2 00:02:43.477 19:40:44 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:43.477 19:40:44 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:43.477 19:40:44 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:43.477 19:40:44 -- scripts/common.sh@368 -- # return 0 00:02:43.477 19:40:44 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:43.477 19:40:44 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:43.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:43.477 --rc genhtml_branch_coverage=1 00:02:43.477 --rc genhtml_function_coverage=1 00:02:43.477 --rc genhtml_legend=1 00:02:43.477 --rc geninfo_all_blocks=1 00:02:43.477 --rc geninfo_unexecuted_blocks=1 00:02:43.477 00:02:43.477 ' 00:02:43.477 19:40:44 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:43.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:43.477 --rc genhtml_branch_coverage=1 00:02:43.477 --rc genhtml_function_coverage=1 00:02:43.477 --rc genhtml_legend=1 00:02:43.477 --rc geninfo_all_blocks=1 00:02:43.477 --rc geninfo_unexecuted_blocks=1 00:02:43.477 00:02:43.477 ' 00:02:43.477 19:40:44 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:43.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:43.477 --rc genhtml_branch_coverage=1 00:02:43.477 --rc genhtml_function_coverage=1 00:02:43.477 --rc genhtml_legend=1 00:02:43.477 --rc geninfo_all_blocks=1 00:02:43.477 --rc geninfo_unexecuted_blocks=1 00:02:43.477 00:02:43.477 ' 00:02:43.477 19:40:44 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:43.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:43.477 --rc genhtml_branch_coverage=1 00:02:43.477 --rc genhtml_function_coverage=1 00:02:43.477 --rc genhtml_legend=1 00:02:43.477 --rc geninfo_all_blocks=1 00:02:43.477 --rc geninfo_unexecuted_blocks=1 00:02:43.477 00:02:43.477 ' 00:02:43.477 19:40:44 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:43.477 19:40:44 -- nvmf/common.sh@7 -- # uname -s 00:02:43.477 19:40:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:43.477 19:40:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:43.477 19:40:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:43.477 19:40:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:43.477 19:40:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:43.477 19:40:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:43.477 19:40:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:43.477 19:40:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:43.477 19:40:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:43.477 19:40:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:43.477 19:40:44 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:43.477 19:40:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:43.477 19:40:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:43.477 19:40:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:43.477 19:40:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:43.477 19:40:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:43.477 19:40:44 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:43.477 19:40:44 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:43.477 19:40:44 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:43.477 19:40:44 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:43.477 19:40:44 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:43.477 19:40:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.477 19:40:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.477 19:40:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.477 19:40:44 -- paths/export.sh@5 -- # export PATH 00:02:43.477 19:40:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.477 19:40:44 -- nvmf/common.sh@51 -- # : 0 00:02:43.477 19:40:44 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:43.477 19:40:44 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:43.477 19:40:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:43.477 19:40:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:43.477 19:40:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:43.477 19:40:44 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:43.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:43.477 19:40:44 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:43.477 19:40:44 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:43.477 19:40:44 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:43.477 19:40:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:43.477 19:40:44 -- spdk/autotest.sh@32 -- # uname -s 00:02:43.477 19:40:44 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:43.477 19:40:44 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:43.477 19:40:44 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
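The "[: : integer expression expected" complaint above comes from nvmf/common.sh line 33, where the trace shows '[' '' -eq 1 ']': a numeric test against a variable that expands to the empty string in this configuration. The test simply evaluates false and the run continues, so it is noise rather than a failure. A minimal sketch of the defensive form that avoids the message, using a hypothetical flag name since the trace only shows the (empty) expansion, not the variable:

    #!/usr/bin/env bash
    # Hypothetical flag; nvmf/common.sh:33 tests some optional 0/1 variable
    # whose name is not visible in the trace above.
    SOME_OPTIONAL_FLAG=""

    # Defaulting the expansion keeps the operand numeric even when the
    # flag is unset or empty, so [ never sees '' -eq 1.
    if [ "${SOME_OPTIONAL_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi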
00:02:43.477 19:40:44 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:43.477 19:40:44 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:43.477 19:40:44 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:43.477 19:40:44 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:43.477 19:40:44 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:43.477 19:40:44 -- spdk/autotest.sh@48 -- # udevadm_pid=3361480 00:02:43.477 19:40:44 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:43.477 19:40:44 -- pm/common@17 -- # local monitor 00:02:43.477 19:40:44 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:43.477 19:40:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.477 19:40:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.477 19:40:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.477 19:40:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:43.478 19:40:44 -- pm/common@21 -- # date +%s 00:02:43.478 19:40:44 -- pm/common@25 -- # sleep 1 00:02:43.478 19:40:44 -- pm/common@21 -- # date +%s 00:02:43.478 19:40:44 -- pm/common@21 -- # date +%s 00:02:43.478 19:40:44 -- pm/common@21 -- # date +%s 00:02:43.478 19:40:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732646444 00:02:43.478 19:40:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732646444 00:02:43.478 19:40:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732646444 00:02:43.478 19:40:44 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732646444 00:02:43.739 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732646444_collect-cpu-load.pm.log 00:02:43.739 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732646444_collect-vmstat.pm.log 00:02:43.739 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732646444_collect-cpu-temp.pm.log 00:02:43.739 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732646444_collect-bmc-pm.bmc.pm.log 00:02:44.684 19:40:45 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:44.684 19:40:45 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:44.684 19:40:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:44.684 19:40:45 -- common/autotest_common.sh@10 -- # set +x 00:02:44.684 19:40:45 -- spdk/autotest.sh@59 -- # create_test_list 00:02:44.684 19:40:45 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:44.684 19:40:45 -- common/autotest_common.sh@10 -- # set +x 00:02:44.684 19:40:45 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:44.684 19:40:45 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:44.684 19:40:45 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:44.684 19:40:45 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:44.684 19:40:45 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:44.684 19:40:45 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:44.684 19:40:45 -- common/autotest_common.sh@1457 -- # uname 00:02:44.684 19:40:45 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:44.684 19:40:45 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:44.684 19:40:45 -- common/autotest_common.sh@1477 -- # uname 00:02:44.684 19:40:45 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:44.684 19:40:45 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:44.684 19:40:45 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:44.684 lcov: LCOV version 1.15 00:02:44.684 19:40:45 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:59.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:59.595 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:17.720 19:41:15 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:17.720 19:41:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:17.720 19:41:15 -- common/autotest_common.sh@10 -- # set +x 00:03:17.720 19:41:15 -- spdk/autotest.sh@78 -- # rm -f 00:03:17.720 19:41:15 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:18.663 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:18.663 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:18.663 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:18.663 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:18.663 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:18.663 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:18.663 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:18.663 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:18.663 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:18.663 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:18.663 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:18.663 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:18.663 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:18.924 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:18.924 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:18.924 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:18.924 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:19.185 19:41:19 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:19.185 19:41:19 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:19.185 19:41:19 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:19.185 19:41:19 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:19.185 19:41:19 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:19.185 19:41:19 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:19.185 19:41:19 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:19.185 19:41:19 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:19.185 19:41:19 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:19.185 19:41:19 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:19.185 19:41:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:19.185 19:41:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:19.185 19:41:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:19.185 19:41:19 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:19.185 19:41:19 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:19.185 No valid GPT data, bailing 00:03:19.185 19:41:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:19.185 19:41:19 -- scripts/common.sh@394 -- # pt= 00:03:19.185 19:41:19 -- scripts/common.sh@395 -- # return 1 00:03:19.185 19:41:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:19.185 1+0 records in 00:03:19.185 1+0 records out 00:03:19.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00462715 s, 227 MB/s 00:03:19.185 19:41:19 -- spdk/autotest.sh@105 -- # sync 00:03:19.185 19:41:19 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:19.185 19:41:19 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:19.185 19:41:19 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:29.201 19:41:28 -- spdk/autotest.sh@111 -- # uname -s 00:03:29.201 19:41:28 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:29.201 19:41:28 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:29.201 19:41:28 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:31.746 Hugepages 00:03:31.746 node hugesize free / total 00:03:31.746 node0 1048576kB 0 / 0 00:03:31.746 node0 2048kB 0 / 0 00:03:31.746 node1 1048576kB 0 / 0 00:03:31.746 node1 2048kB 0 / 0 00:03:31.746 00:03:31.746 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:31.746 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:31.746 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:31.746 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:31.746 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:31.746 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:31.746 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:31.746 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:31.746 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:31.746 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:31.746 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:31.746 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:31.746 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:31.746 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:31.746 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:31.746 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:31.746 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:31.746 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:03:31.746 19:41:32 -- spdk/autotest.sh@117 -- # uname -s 00:03:31.746 19:41:32 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:31.746 19:41:32 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:31.746 19:41:32 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:35.049 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:35.049 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:35.049 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:35.049 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:35.049 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:35.049 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:35.049 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:35.049 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:35.049 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:35.049 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:35.311 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:35.311 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:35.311 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:35.311 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:35.311 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:35.311 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:37.224 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:37.224 19:41:38 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:38.604 19:41:39 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:38.604 19:41:39 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:38.604 19:41:39 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:38.604 19:41:39 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:38.604 19:41:39 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:38.604 19:41:39 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:38.604 19:41:39 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:38.604 19:41:39 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:38.604 19:41:39 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:38.604 19:41:39 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:38.604 19:41:39 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:38.604 19:41:39 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:41.904 Waiting for block devices as requested 00:03:41.904 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:41.904 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:42.164 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:42.164 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:42.164 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:42.164 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:42.424 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:42.424 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:42.424 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:42.684 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:42.945 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:42.945 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:42.945 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:42.945 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:43.295 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:43.295 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:43.295 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:03:43.558 19:41:44 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:43.558 19:41:44 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:43.558 19:41:44 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:43.558 19:41:44 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:03:43.558 19:41:44 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:43.558 19:41:44 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:43.558 19:41:44 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:43.558 19:41:44 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:43.558 19:41:44 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:43.558 19:41:44 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:43.558 19:41:44 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:43.558 19:41:44 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:43.558 19:41:44 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:43.559 19:41:44 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:03:43.559 19:41:44 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:43.559 19:41:44 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:43.559 19:41:44 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:43.559 19:41:44 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:43.559 19:41:44 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:43.559 19:41:44 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:43.559 19:41:44 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:43.559 19:41:44 -- common/autotest_common.sh@1543 -- # continue 00:03:43.559 19:41:44 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:43.559 19:41:44 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:43.559 19:41:44 -- common/autotest_common.sh@10 -- # set +x 00:03:43.819 19:41:44 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:43.819 19:41:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:43.819 19:41:44 -- common/autotest_common.sh@10 -- # set +x 00:03:43.819 19:41:44 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:47.123 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:47.123 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:47.123 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:47.123 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:47.123 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:47.123 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:47.123 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:47.123 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:47.383 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:47.383 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:47.383 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:47.383 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:47.383 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:47.383 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:47.383 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:47.383 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:47.383 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:47.643 19:41:48 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:03:47.643 19:41:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:47.643 19:41:48 -- common/autotest_common.sh@10 -- # set +x 00:03:47.903 19:41:48 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:47.903 19:41:48 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:47.903 19:41:48 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:47.903 19:41:48 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:47.903 19:41:48 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:47.903 19:41:48 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:47.903 19:41:48 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:47.903 19:41:48 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:47.903 19:41:48 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:47.903 19:41:48 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:47.903 19:41:48 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:47.903 19:41:48 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:47.903 19:41:48 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:47.903 19:41:48 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:47.903 19:41:48 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:47.903 19:41:48 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:47.903 19:41:48 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:03:47.903 19:41:48 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:03:47.903 19:41:48 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:47.903 19:41:48 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:47.903 19:41:48 -- common/autotest_common.sh@1572 -- # return 0 00:03:47.903 19:41:48 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:47.903 19:41:48 -- common/autotest_common.sh@1580 -- # return 0 00:03:47.903 19:41:48 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:47.903 19:41:48 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:47.903 19:41:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:47.903 19:41:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:47.903 19:41:48 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:47.903 19:41:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:47.903 19:41:48 -- common/autotest_common.sh@10 -- # set +x 00:03:47.903 19:41:48 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:47.903 19:41:48 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:47.903 19:41:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:47.903 19:41:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:47.903 19:41:48 -- common/autotest_common.sh@10 -- # set +x 00:03:47.903 ************************************ 00:03:47.903 START TEST env 00:03:47.903 ************************************ 00:03:47.903 19:41:48 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:48.164 * Looking for test storage... 
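The opal_revert_cleanup trace above shows how autotest enumerates NVMe controllers: gen_nvme.sh emits an SPDK bdev config as JSON, jq pulls each controller's PCI address from .config[].params.traddr, and the sysfs device ID of each BDF is compared against 0x0a54. Here the only controller (144d:a80a, a Samsung device at 0000:65:00.0) reports 0xa80a, so no opal revert is needed and the env suite starts; the next entry reports the test storage it locates. A rough standalone equivalent of that discovery loop, assuming a Linux sysfs layout and no SPDK tree:

    #!/usr/bin/env bash
    # Sketch of the BDF enumeration and device-id check traced above;
    # walks sysfs directly instead of calling gen_nvme.sh | jq.
    for ctrl in /sys/class/nvme/nvme*; do
        [ -e "$ctrl" ] || continue
        bdf=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:65:00.0
        device=$(cat "/sys/bus/pci/devices/$bdf/device")  # e.g. 0xa80a
        if [ "$device" = "0x0a54" ]; then
            echo "$bdf matches 0x0a54: would opal-revert"
        else
            echo "$bdf skipped (device id $device)"
        fi
    done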
00:03:48.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:48.164 19:41:48 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:48.164 19:41:48 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:48.164 19:41:48 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:48.164 19:41:48 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:48.164 19:41:48 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:48.164 19:41:48 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:48.164 19:41:48 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:48.164 19:41:48 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:48.164 19:41:48 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:48.164 19:41:48 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:48.164 19:41:48 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:48.164 19:41:48 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:48.164 19:41:48 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:48.164 19:41:48 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:48.164 19:41:48 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:48.164 19:41:48 env -- scripts/common.sh@344 -- # case "$op" in 00:03:48.164 19:41:48 env -- scripts/common.sh@345 -- # : 1 00:03:48.164 19:41:48 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:48.164 19:41:48 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:48.164 19:41:48 env -- scripts/common.sh@365 -- # decimal 1 00:03:48.164 19:41:48 env -- scripts/common.sh@353 -- # local d=1 00:03:48.164 19:41:48 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:48.164 19:41:48 env -- scripts/common.sh@355 -- # echo 1 00:03:48.164 19:41:48 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:48.164 19:41:48 env -- scripts/common.sh@366 -- # decimal 2 00:03:48.164 19:41:48 env -- scripts/common.sh@353 -- # local d=2 00:03:48.164 19:41:48 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:48.164 19:41:48 env -- scripts/common.sh@355 -- # echo 2 00:03:48.164 19:41:48 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:48.164 19:41:48 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:48.164 19:41:48 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:48.164 19:41:48 env -- scripts/common.sh@368 -- # return 0 00:03:48.164 19:41:48 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:48.164 19:41:48 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:48.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.164 --rc genhtml_branch_coverage=1 00:03:48.164 --rc genhtml_function_coverage=1 00:03:48.164 --rc genhtml_legend=1 00:03:48.164 --rc geninfo_all_blocks=1 00:03:48.164 --rc geninfo_unexecuted_blocks=1 00:03:48.164 00:03:48.164 ' 00:03:48.164 19:41:48 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:48.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.164 --rc genhtml_branch_coverage=1 00:03:48.164 --rc genhtml_function_coverage=1 00:03:48.164 --rc genhtml_legend=1 00:03:48.164 --rc geninfo_all_blocks=1 00:03:48.164 --rc geninfo_unexecuted_blocks=1 00:03:48.164 00:03:48.164 ' 00:03:48.164 19:41:48 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:48.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.164 --rc genhtml_branch_coverage=1 00:03:48.164 --rc genhtml_function_coverage=1 
00:03:48.164 --rc genhtml_legend=1 00:03:48.164 --rc geninfo_all_blocks=1 00:03:48.164 --rc geninfo_unexecuted_blocks=1 00:03:48.164 00:03:48.164 ' 00:03:48.164 19:41:48 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:48.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.164 --rc genhtml_branch_coverage=1 00:03:48.164 --rc genhtml_function_coverage=1 00:03:48.164 --rc genhtml_legend=1 00:03:48.164 --rc geninfo_all_blocks=1 00:03:48.164 --rc geninfo_unexecuted_blocks=1 00:03:48.164 00:03:48.164 ' 00:03:48.164 19:41:48 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:48.164 19:41:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.164 19:41:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.164 19:41:48 env -- common/autotest_common.sh@10 -- # set +x 00:03:48.164 ************************************ 00:03:48.164 START TEST env_memory 00:03:48.164 ************************************ 00:03:48.164 19:41:48 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:48.164 00:03:48.164 00:03:48.164 CUnit - A unit testing framework for C - Version 2.1-3 00:03:48.164 http://cunit.sourceforge.net/ 00:03:48.164 00:03:48.164 00:03:48.164 Suite: memory 00:03:48.164 Test: alloc and free memory map ...[2024-11-26 19:41:48.936873] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:48.164 passed 00:03:48.164 Test: mem map translation ...[2024-11-26 19:41:48.962593] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:48.164 [2024-11-26 19:41:48.962625] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:48.164 [2024-11-26 19:41:48.962672] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:48.164 [2024-11-26 19:41:48.962680] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:48.425 passed 00:03:48.425 Test: mem map registration ...[2024-11-26 19:41:49.017939] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:48.425 [2024-11-26 19:41:49.017964] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:48.425 passed 00:03:48.425 Test: mem map adjacent registrations ...passed 00:03:48.425 00:03:48.425 Run Summary: Type Total Ran Passed Failed Inactive 00:03:48.425 suites 1 1 n/a 0 0 00:03:48.425 tests 4 4 4 0 0 00:03:48.425 asserts 152 152 152 0 n/a 00:03:48.425 00:03:48.425 Elapsed time = 0.192 seconds 00:03:48.425 00:03:48.425 real 0m0.207s 00:03:48.425 user 0m0.195s 00:03:48.425 sys 0m0.011s 00:03:48.425 19:41:49 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.425 19:41:49 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
00:03:48.425 ************************************ 00:03:48.425 END TEST env_memory 00:03:48.425 ************************************ 00:03:48.425 19:41:49 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:48.425 19:41:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.425 19:41:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.425 19:41:49 env -- common/autotest_common.sh@10 -- # set +x 00:03:48.425 ************************************ 00:03:48.425 START TEST env_vtophys 00:03:48.425 ************************************ 00:03:48.425 19:41:49 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:48.425 EAL: lib.eal log level changed from notice to debug 00:03:48.425 EAL: Detected lcore 0 as core 0 on socket 0 00:03:48.425 EAL: Detected lcore 1 as core 1 on socket 0 00:03:48.425 EAL: Detected lcore 2 as core 2 on socket 0 00:03:48.425 EAL: Detected lcore 3 as core 3 on socket 0 00:03:48.425 EAL: Detected lcore 4 as core 4 on socket 0 00:03:48.425 EAL: Detected lcore 5 as core 5 on socket 0 00:03:48.425 EAL: Detected lcore 6 as core 6 on socket 0 00:03:48.425 EAL: Detected lcore 7 as core 7 on socket 0 00:03:48.425 EAL: Detected lcore 8 as core 8 on socket 0 00:03:48.425 EAL: Detected lcore 9 as core 9 on socket 0 00:03:48.425 EAL: Detected lcore 10 as core 10 on socket 0 00:03:48.425 EAL: Detected lcore 11 as core 11 on socket 0 00:03:48.425 EAL: Detected lcore 12 as core 12 on socket 0 00:03:48.425 EAL: Detected lcore 13 as core 13 on socket 0 00:03:48.425 EAL: Detected lcore 14 as core 14 on socket 0 00:03:48.425 EAL: Detected lcore 15 as core 15 on socket 0 00:03:48.425 EAL: Detected lcore 16 as core 16 on socket 0 00:03:48.425 EAL: Detected lcore 17 as core 17 on socket 0 00:03:48.425 EAL: Detected lcore 18 as core 18 on socket 0 00:03:48.425 EAL: Detected lcore 19 as core 19 on socket 0 00:03:48.425 EAL: Detected lcore 20 as core 20 on socket 0 00:03:48.425 EAL: Detected lcore 21 as core 21 on socket 0 00:03:48.425 EAL: Detected lcore 22 as core 22 on socket 0 00:03:48.425 EAL: Detected lcore 23 as core 23 on socket 0 00:03:48.425 EAL: Detected lcore 24 as core 24 on socket 0 00:03:48.425 EAL: Detected lcore 25 as core 25 on socket 0 00:03:48.425 EAL: Detected lcore 26 as core 26 on socket 0 00:03:48.425 EAL: Detected lcore 27 as core 27 on socket 0 00:03:48.425 EAL: Detected lcore 28 as core 28 on socket 0 00:03:48.425 EAL: Detected lcore 29 as core 29 on socket 0 00:03:48.425 EAL: Detected lcore 30 as core 30 on socket 0 00:03:48.425 EAL: Detected lcore 31 as core 31 on socket 0 00:03:48.425 EAL: Detected lcore 32 as core 32 on socket 0 00:03:48.425 EAL: Detected lcore 33 as core 33 on socket 0 00:03:48.426 EAL: Detected lcore 34 as core 34 on socket 0 00:03:48.426 EAL: Detected lcore 35 as core 35 on socket 0 00:03:48.426 EAL: Detected lcore 36 as core 0 on socket 1 00:03:48.426 EAL: Detected lcore 37 as core 1 on socket 1 00:03:48.426 EAL: Detected lcore 38 as core 2 on socket 1 00:03:48.426 EAL: Detected lcore 39 as core 3 on socket 1 00:03:48.426 EAL: Detected lcore 40 as core 4 on socket 1 00:03:48.426 EAL: Detected lcore 41 as core 5 on socket 1 00:03:48.426 EAL: Detected lcore 42 as core 6 on socket 1 00:03:48.426 EAL: Detected lcore 43 as core 7 on socket 1 00:03:48.426 EAL: Detected lcore 44 as core 8 on socket 1 00:03:48.426 EAL: Detected lcore 45 as core 9 on socket 1 
00:03:48.426 EAL: Detected lcore 46 as core 10 on socket 1 00:03:48.426 EAL: Detected lcore 47 as core 11 on socket 1 00:03:48.426 EAL: Detected lcore 48 as core 12 on socket 1 00:03:48.426 EAL: Detected lcore 49 as core 13 on socket 1 00:03:48.426 EAL: Detected lcore 50 as core 14 on socket 1 00:03:48.426 EAL: Detected lcore 51 as core 15 on socket 1 00:03:48.426 EAL: Detected lcore 52 as core 16 on socket 1 00:03:48.426 EAL: Detected lcore 53 as core 17 on socket 1 00:03:48.426 EAL: Detected lcore 54 as core 18 on socket 1 00:03:48.426 EAL: Detected lcore 55 as core 19 on socket 1 00:03:48.426 EAL: Detected lcore 56 as core 20 on socket 1 00:03:48.426 EAL: Detected lcore 57 as core 21 on socket 1 00:03:48.426 EAL: Detected lcore 58 as core 22 on socket 1 00:03:48.426 EAL: Detected lcore 59 as core 23 on socket 1 00:03:48.426 EAL: Detected lcore 60 as core 24 on socket 1 00:03:48.426 EAL: Detected lcore 61 as core 25 on socket 1 00:03:48.426 EAL: Detected lcore 62 as core 26 on socket 1 00:03:48.426 EAL: Detected lcore 63 as core 27 on socket 1 00:03:48.426 EAL: Detected lcore 64 as core 28 on socket 1 00:03:48.426 EAL: Detected lcore 65 as core 29 on socket 1 00:03:48.426 EAL: Detected lcore 66 as core 30 on socket 1 00:03:48.426 EAL: Detected lcore 67 as core 31 on socket 1 00:03:48.426 EAL: Detected lcore 68 as core 32 on socket 1 00:03:48.426 EAL: Detected lcore 69 as core 33 on socket 1 00:03:48.426 EAL: Detected lcore 70 as core 34 on socket 1 00:03:48.426 EAL: Detected lcore 71 as core 35 on socket 1 00:03:48.426 EAL: Detected lcore 72 as core 0 on socket 0 00:03:48.426 EAL: Detected lcore 73 as core 1 on socket 0 00:03:48.426 EAL: Detected lcore 74 as core 2 on socket 0 00:03:48.426 EAL: Detected lcore 75 as core 3 on socket 0 00:03:48.426 EAL: Detected lcore 76 as core 4 on socket 0 00:03:48.426 EAL: Detected lcore 77 as core 5 on socket 0 00:03:48.426 EAL: Detected lcore 78 as core 6 on socket 0 00:03:48.426 EAL: Detected lcore 79 as core 7 on socket 0 00:03:48.426 EAL: Detected lcore 80 as core 8 on socket 0 00:03:48.426 EAL: Detected lcore 81 as core 9 on socket 0 00:03:48.426 EAL: Detected lcore 82 as core 10 on socket 0 00:03:48.426 EAL: Detected lcore 83 as core 11 on socket 0 00:03:48.426 EAL: Detected lcore 84 as core 12 on socket 0 00:03:48.426 EAL: Detected lcore 85 as core 13 on socket 0 00:03:48.426 EAL: Detected lcore 86 as core 14 on socket 0 00:03:48.426 EAL: Detected lcore 87 as core 15 on socket 0 00:03:48.426 EAL: Detected lcore 88 as core 16 on socket 0 00:03:48.426 EAL: Detected lcore 89 as core 17 on socket 0 00:03:48.426 EAL: Detected lcore 90 as core 18 on socket 0 00:03:48.426 EAL: Detected lcore 91 as core 19 on socket 0 00:03:48.426 EAL: Detected lcore 92 as core 20 on socket 0 00:03:48.426 EAL: Detected lcore 93 as core 21 on socket 0 00:03:48.426 EAL: Detected lcore 94 as core 22 on socket 0 00:03:48.426 EAL: Detected lcore 95 as core 23 on socket 0 00:03:48.426 EAL: Detected lcore 96 as core 24 on socket 0 00:03:48.426 EAL: Detected lcore 97 as core 25 on socket 0 00:03:48.426 EAL: Detected lcore 98 as core 26 on socket 0 00:03:48.426 EAL: Detected lcore 99 as core 27 on socket 0 00:03:48.426 EAL: Detected lcore 100 as core 28 on socket 0 00:03:48.426 EAL: Detected lcore 101 as core 29 on socket 0 00:03:48.426 EAL: Detected lcore 102 as core 30 on socket 0 00:03:48.426 EAL: Detected lcore 103 as core 31 on socket 0 00:03:48.426 EAL: Detected lcore 104 as core 32 on socket 0 00:03:48.426 EAL: Detected lcore 105 as core 33 on socket 0 00:03:48.426 EAL: 
Detected lcore 106 as core 34 on socket 0 00:03:48.426 EAL: Detected lcore 107 as core 35 on socket 0 00:03:48.426 EAL: Detected lcore 108 as core 0 on socket 1 00:03:48.426 EAL: Detected lcore 109 as core 1 on socket 1 00:03:48.426 EAL: Detected lcore 110 as core 2 on socket 1 00:03:48.426 EAL: Detected lcore 111 as core 3 on socket 1 00:03:48.426 EAL: Detected lcore 112 as core 4 on socket 1 00:03:48.426 EAL: Detected lcore 113 as core 5 on socket 1 00:03:48.426 EAL: Detected lcore 114 as core 6 on socket 1 00:03:48.426 EAL: Detected lcore 115 as core 7 on socket 1 00:03:48.426 EAL: Detected lcore 116 as core 8 on socket 1 00:03:48.426 EAL: Detected lcore 117 as core 9 on socket 1 00:03:48.426 EAL: Detected lcore 118 as core 10 on socket 1 00:03:48.426 EAL: Detected lcore 119 as core 11 on socket 1 00:03:48.426 EAL: Detected lcore 120 as core 12 on socket 1 00:03:48.426 EAL: Detected lcore 121 as core 13 on socket 1 00:03:48.426 EAL: Detected lcore 122 as core 14 on socket 1 00:03:48.426 EAL: Detected lcore 123 as core 15 on socket 1 00:03:48.426 EAL: Detected lcore 124 as core 16 on socket 1 00:03:48.426 EAL: Detected lcore 125 as core 17 on socket 1 00:03:48.426 EAL: Detected lcore 126 as core 18 on socket 1 00:03:48.426 EAL: Detected lcore 127 as core 19 on socket 1 00:03:48.426 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:48.426 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:48.426 EAL: Skipped lcore 130 as core 22 on socket 1 00:03:48.426 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:48.426 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:48.426 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:48.426 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:48.426 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:48.426 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:48.426 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:48.426 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:48.426 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:48.426 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:48.426 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:48.426 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:48.426 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:48.426 EAL: Maximum logical cores by configuration: 128 00:03:48.426 EAL: Detected CPU lcores: 128 00:03:48.426 EAL: Detected NUMA nodes: 2 00:03:48.426 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:48.426 EAL: Detected shared linkage of DPDK 00:03:48.426 EAL: No shared files mode enabled, IPC will be disabled 00:03:48.426 EAL: Bus pci wants IOVA as 'DC' 00:03:48.426 EAL: Buses did not request a specific IOVA mode. 00:03:48.426 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:48.426 EAL: Selected IOVA mode 'VA' 00:03:48.426 EAL: Probing VFIO support... 00:03:48.426 EAL: IOMMU type 1 (Type 1) is supported 00:03:48.426 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:48.426 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:48.426 EAL: VFIO support initialized 00:03:48.426 EAL: Ask a virtual area of 0x2e000 bytes 00:03:48.426 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:48.426 EAL: Setting up physically contiguous memory... 
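At this point EAL has detected 128 usable lcores across two NUMA sockets (lcores 128-143 are skipped because, per the log, the maximum logical cores by configuration is 128), selected IOVA-as-VA with VFIO type 1 support, and now reserves virtual address space for hugepage-backed memory. The entries that follow create, per socket, four memseg lists of 8192 segments at the 2 MiB (0x800kB) hugepage size: each list takes a 0x61000-byte header plus a 0x400000000-byte window, i.e. 8192 x 2 MiB = 16 GiB of VA per list. Those reservations draw on the host's per-node hugepage pools, which can be inspected directly; a small sketch using standard sysfs paths:

    #!/usr/bin/env bash
    # Report the 2 MiB hugepage pools on the two NUMA nodes seen above.
    for node in /sys/devices/system/node/node[01]; do
        pool="$node/hugepages/hugepages-2048kB"
        printf '%s: %s free of %s 2MiB hugepages\n' \
            "$(basename "$node")" \
            "$(cat "$pool/free_hugepages")" \
            "$(cat "$pool/nr_hugepages")"
    done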
00:03:48.426 EAL: Setting maximum number of open files to 524288 00:03:48.426 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:48.426 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:48.426 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:48.426 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.426 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:48.426 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:48.426 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.426 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:48.426 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:48.426 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.426 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:48.426 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:48.426 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.426 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:48.426 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:48.426 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.426 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:48.426 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:48.426 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.426 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:48.426 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:48.426 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.426 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:48.426 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:48.426 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.426 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:48.426 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:48.426 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:48.426 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.426 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:48.426 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:48.426 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.426 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:48.426 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:48.426 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.426 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:48.426 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:48.426 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.426 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:48.426 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:48.426 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.426 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:48.426 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:48.426 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.426 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:48.426 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:48.426 EAL: Ask a virtual area of 0x61000 bytes 00:03:48.426 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:48.426 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:48.426 EAL: Ask a virtual area of 0x400000000 bytes 00:03:48.426 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:48.426 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:48.426 EAL: Hugepages will be freed exactly as allocated. 00:03:48.426 EAL: No shared files mode enabled, IPC is disabled 00:03:48.426 EAL: No shared files mode enabled, IPC is disabled 00:03:48.426 EAL: TSC frequency is ~2400000 KHz 00:03:48.426 EAL: Main lcore 0 is ready (tid=7f84125fca00;cpuset=[0]) 00:03:48.427 EAL: Trying to obtain current memory policy. 00:03:48.427 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.427 EAL: Restoring previous memory policy: 0 00:03:48.427 EAL: request: mp_malloc_sync 00:03:48.427 EAL: No shared files mode enabled, IPC is disabled 00:03:48.427 EAL: Heap on socket 0 was expanded by 2MB 00:03:48.427 EAL: No shared files mode enabled, IPC is disabled 00:03:48.686 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:48.686 EAL: Mem event callback 'spdk:(nil)' registered 00:03:48.686 00:03:48.686 00:03:48.686 CUnit - A unit testing framework for C - Version 2.1-3 00:03:48.686 http://cunit.sourceforge.net/ 00:03:48.686 00:03:48.686 00:03:48.686 Suite: components_suite 00:03:48.686 Test: vtophys_malloc_test ...passed 00:03:48.686 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:48.686 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.686 EAL: Restoring previous memory policy: 4 00:03:48.686 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.686 EAL: request: mp_malloc_sync 00:03:48.686 EAL: No shared files mode enabled, IPC is disabled 00:03:48.686 EAL: Heap on socket 0 was expanded by 4MB 00:03:48.686 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.686 EAL: request: mp_malloc_sync 00:03:48.686 EAL: No shared files mode enabled, IPC is disabled 00:03:48.686 EAL: Heap on socket 0 was shrunk by 4MB 00:03:48.686 EAL: Trying to obtain current memory policy. 00:03:48.686 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.686 EAL: Restoring previous memory policy: 4 00:03:48.686 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.686 EAL: request: mp_malloc_sync 00:03:48.686 EAL: No shared files mode enabled, IPC is disabled 00:03:48.686 EAL: Heap on socket 0 was expanded by 6MB 00:03:48.686 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.686 EAL: request: mp_malloc_sync 00:03:48.686 EAL: No shared files mode enabled, IPC is disabled 00:03:48.686 EAL: Heap on socket 0 was shrunk by 6MB 00:03:48.686 EAL: Trying to obtain current memory policy. 00:03:48.686 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.686 EAL: Restoring previous memory policy: 4 00:03:48.686 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.686 EAL: request: mp_malloc_sync 00:03:48.686 EAL: No shared files mode enabled, IPC is disabled 00:03:48.686 EAL: Heap on socket 0 was expanded by 10MB 00:03:48.686 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.686 EAL: request: mp_malloc_sync 00:03:48.686 EAL: No shared files mode enabled, IPC is disabled 00:03:48.686 EAL: Heap on socket 0 was shrunk by 10MB 00:03:48.686 EAL: Trying to obtain current memory policy. 
00:03:48.686 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.686 EAL: Restoring previous memory policy: 4 00:03:48.686 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.686 EAL: request: mp_malloc_sync 00:03:48.686 EAL: No shared files mode enabled, IPC is disabled 00:03:48.686 EAL: Heap on socket 0 was expanded by 18MB 00:03:48.686 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.686 EAL: request: mp_malloc_sync 00:03:48.687 EAL: No shared files mode enabled, IPC is disabled 00:03:48.687 EAL: Heap on socket 0 was shrunk by 18MB 00:03:48.687 EAL: Trying to obtain current memory policy. 00:03:48.687 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.687 EAL: Restoring previous memory policy: 4 00:03:48.687 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.687 EAL: request: mp_malloc_sync 00:03:48.687 EAL: No shared files mode enabled, IPC is disabled 00:03:48.687 EAL: Heap on socket 0 was expanded by 34MB 00:03:48.687 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.687 EAL: request: mp_malloc_sync 00:03:48.687 EAL: No shared files mode enabled, IPC is disabled 00:03:48.687 EAL: Heap on socket 0 was shrunk by 34MB 00:03:48.687 EAL: Trying to obtain current memory policy. 00:03:48.687 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.687 EAL: Restoring previous memory policy: 4 00:03:48.687 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.687 EAL: request: mp_malloc_sync 00:03:48.687 EAL: No shared files mode enabled, IPC is disabled 00:03:48.687 EAL: Heap on socket 0 was expanded by 66MB 00:03:48.687 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.687 EAL: request: mp_malloc_sync 00:03:48.687 EAL: No shared files mode enabled, IPC is disabled 00:03:48.687 EAL: Heap on socket 0 was shrunk by 66MB 00:03:48.687 EAL: Trying to obtain current memory policy. 00:03:48.687 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.687 EAL: Restoring previous memory policy: 4 00:03:48.687 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.687 EAL: request: mp_malloc_sync 00:03:48.687 EAL: No shared files mode enabled, IPC is disabled 00:03:48.687 EAL: Heap on socket 0 was expanded by 130MB 00:03:48.687 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.687 EAL: request: mp_malloc_sync 00:03:48.687 EAL: No shared files mode enabled, IPC is disabled 00:03:48.687 EAL: Heap on socket 0 was shrunk by 130MB 00:03:48.687 EAL: Trying to obtain current memory policy. 00:03:48.687 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.687 EAL: Restoring previous memory policy: 4 00:03:48.687 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.687 EAL: request: mp_malloc_sync 00:03:48.687 EAL: No shared files mode enabled, IPC is disabled 00:03:48.687 EAL: Heap on socket 0 was expanded by 258MB 00:03:48.687 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.687 EAL: request: mp_malloc_sync 00:03:48.687 EAL: No shared files mode enabled, IPC is disabled 00:03:48.687 EAL: Heap on socket 0 was shrunk by 258MB 00:03:48.687 EAL: Trying to obtain current memory policy. 
00:03:48.687 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.948 EAL: Restoring previous memory policy: 4 00:03:48.948 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.948 EAL: request: mp_malloc_sync 00:03:48.948 EAL: No shared files mode enabled, IPC is disabled 00:03:48.948 EAL: Heap on socket 0 was expanded by 514MB 00:03:48.948 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.948 EAL: request: mp_malloc_sync 00:03:48.948 EAL: No shared files mode enabled, IPC is disabled 00:03:48.948 EAL: Heap on socket 0 was shrunk by 514MB 00:03:48.948 EAL: Trying to obtain current memory policy. 00:03:48.948 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.208 EAL: Restoring previous memory policy: 4 00:03:49.208 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.208 EAL: request: mp_malloc_sync 00:03:49.208 EAL: No shared files mode enabled, IPC is disabled 00:03:49.208 EAL: Heap on socket 0 was expanded by 1026MB 00:03:49.208 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.209 EAL: request: mp_malloc_sync 00:03:49.209 EAL: No shared files mode enabled, IPC is disabled 00:03:49.209 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:49.209 passed 00:03:49.209 00:03:49.209 Run Summary: Type Total Ran Passed Failed Inactive 00:03:49.209 suites 1 1 n/a 0 0 00:03:49.209 tests 2 2 2 0 0 00:03:49.209 asserts 497 497 497 0 n/a 00:03:49.209 00:03:49.209 Elapsed time = 0.689 seconds 00:03:49.209 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.209 EAL: request: mp_malloc_sync 00:03:49.209 EAL: No shared files mode enabled, IPC is disabled 00:03:49.209 EAL: Heap on socket 0 was shrunk by 2MB 00:03:49.209 EAL: No shared files mode enabled, IPC is disabled 00:03:49.209 EAL: No shared files mode enabled, IPC is disabled 00:03:49.209 EAL: No shared files mode enabled, IPC is disabled 00:03:49.209 00:03:49.209 real 0m0.845s 00:03:49.209 user 0m0.445s 00:03:49.209 sys 0m0.365s 00:03:49.209 19:41:50 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.209 19:41:50 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:49.209 ************************************ 00:03:49.209 END TEST env_vtophys 00:03:49.209 ************************************ 00:03:49.469 19:41:50 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:49.469 19:41:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.469 19:41:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.469 19:41:50 env -- common/autotest_common.sh@10 -- # set +x 00:03:49.469 ************************************ 00:03:49.469 START TEST env_pci 00:03:49.469 ************************************ 00:03:49.469 19:41:50 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:49.469 00:03:49.469 00:03:49.469 CUnit - A unit testing framework for C - Version 2.1-3 00:03:49.469 http://cunit.sourceforge.net/ 00:03:49.469 00:03:49.469 00:03:49.469 Suite: pci 00:03:49.469 Test: pci_hook ...[2024-11-26 19:41:50.114263] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3381462 has claimed it 00:03:49.470 EAL: Cannot find device (10000:00:01.0) 00:03:49.470 EAL: Failed to attach device on primary process 00:03:49.470 passed 00:03:49.470 00:03:49.470 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:49.470 suites 1 1 n/a 0 0 00:03:49.470 tests 1 1 1 0 0 00:03:49.470 asserts 25 25 25 0 n/a 00:03:49.470 00:03:49.470 Elapsed time = 0.030 seconds 00:03:49.470 00:03:49.470 real 0m0.051s 00:03:49.470 user 0m0.016s 00:03:49.470 sys 0m0.034s 00:03:49.470 19:41:50 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.470 19:41:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:49.470 ************************************ 00:03:49.470 END TEST env_pci 00:03:49.470 ************************************ 00:03:49.470 19:41:50 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:49.470 19:41:50 env -- env/env.sh@15 -- # uname 00:03:49.470 19:41:50 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:49.470 19:41:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:49.470 19:41:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:49.470 19:41:50 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:49.470 19:41:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.470 19:41:50 env -- common/autotest_common.sh@10 -- # set +x 00:03:49.470 ************************************ 00:03:49.470 START TEST env_dpdk_post_init 00:03:49.470 ************************************ 00:03:49.470 19:41:50 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:49.470 EAL: Detected CPU lcores: 128 00:03:49.470 EAL: Detected NUMA nodes: 2 00:03:49.470 EAL: Detected shared linkage of DPDK 00:03:49.470 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:49.731 EAL: Selected IOVA mode 'VA' 00:03:49.731 EAL: VFIO support initialized 00:03:49.731 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:49.731 EAL: Using IOMMU type 1 (Type 1) 00:03:49.731 EAL: Ignore mapping IO port bar(1) 00:03:49.991 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:03:49.991 EAL: Ignore mapping IO port bar(1) 00:03:50.251 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:03:50.251 EAL: Ignore mapping IO port bar(1) 00:03:50.251 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:03:50.512 EAL: Ignore mapping IO port bar(1) 00:03:50.512 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:03:50.773 EAL: Ignore mapping IO port bar(1) 00:03:50.773 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:03:51.033 EAL: Ignore mapping IO port bar(1) 00:03:51.033 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:03:51.033 EAL: Ignore mapping IO port bar(1) 00:03:51.294 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:03:51.294 EAL: Ignore mapping IO port bar(1) 00:03:51.554 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:03:51.813 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:03:51.814 EAL: Ignore mapping IO port bar(1) 00:03:51.814 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:03:52.073 EAL: Ignore mapping IO port bar(1) 00:03:52.073 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:03:52.333 EAL: Ignore mapping IO port bar(1) 00:03:52.333 
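
The pci_hook failure a few lines up is the intended outcome: spdk_pci_device_claim takes an exclusive lock on a per-address file under /var/tmp (the /var/tmp/spdk_pci_lock_10000:00:01.0 path in the error), and the test provokes a second claim on a synthetic 10000: domain address to prove the lock holds. After a crashed run, leftover claims can be inspected by hand; a sketch, borrowing the real NVMe address probed in this sweep and assuming fuser is installed:

  ls -l /var/tmp/spdk_pci_lock_*
  # the claim holder keeps the lock file open, so no output here means no live owner
  sudo fuser -v /var/tmp/spdk_pci_lock_0000:65:00.0
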
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:03:52.593 EAL: Ignore mapping IO port bar(1) 00:03:52.593 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:03:52.853 EAL: Ignore mapping IO port bar(1) 00:03:52.853 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:03:52.853 EAL: Ignore mapping IO port bar(1) 00:03:53.113 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:03:53.113 EAL: Ignore mapping IO port bar(1) 00:03:53.374 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:03:53.374 EAL: Ignore mapping IO port bar(1) 00:03:53.634 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:03:53.634 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:03:53.634 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:03:53.634 Starting DPDK initialization... 00:03:53.634 Starting SPDK post initialization... 00:03:53.634 SPDK NVMe probe 00:03:53.634 Attaching to 0000:65:00.0 00:03:53.634 Attached to 0000:65:00.0 00:03:53.634 Cleaning up... 00:03:55.546 00:03:55.546 real 0m5.747s 00:03:55.546 user 0m0.110s 00:03:55.546 sys 0m0.196s 00:03:55.546 19:41:55 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:55.546 19:41:55 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:55.546 ************************************ 00:03:55.546 END TEST env_dpdk_post_init 00:03:55.546 ************************************ 00:03:55.546 19:41:56 env -- env/env.sh@26 -- # uname 00:03:55.546 19:41:56 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:55.546 19:41:56 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:55.546 19:41:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:55.546 19:41:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:55.546 19:41:56 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.546 ************************************ 00:03:55.546 START TEST env_mem_callbacks 00:03:55.546 ************************************ 00:03:55.546 19:41:56 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:55.546 EAL: Detected CPU lcores: 128 00:03:55.546 EAL: Detected NUMA nodes: 2 00:03:55.546 EAL: Detected shared linkage of DPDK 00:03:55.546 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:55.546 EAL: Selected IOVA mode 'VA' 00:03:55.546 EAL: VFIO support initialized 00:03:55.546 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:55.546 00:03:55.546 00:03:55.546 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.546 http://cunit.sourceforge.net/ 00:03:55.546 00:03:55.546 00:03:55.546 Suite: memory 00:03:55.546 Test: test ... 
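
The mem_callbacks trace below interleaves register/unregister notifications with the mallocs and frees that trigger them; any range registered but never unregistered would point at leaked translation bookkeeping. A quick pairing check over a saved copy of the output (console.log again assumed):

  grep -oE '(un)?register 0x[0-9a-f]+ [0-9]+' console.log \
    | awk '$1 == "register"   { live[$2] = $3 }
           $1 == "unregister" { delete live[$2] }
           END { for (a in live) print "never unregistered:", a, live[a], "bytes" }'

For this run the only survivor should be the initial 2MB region at 0x200000200000, which stays registered for the life of the test.
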
00:03:55.546 register 0x200000200000 2097152 00:03:55.546 malloc 3145728 00:03:55.546 register 0x200000400000 4194304 00:03:55.546 buf 0x200000500000 len 3145728 PASSED 00:03:55.546 malloc 64 00:03:55.546 buf 0x2000004fff40 len 64 PASSED 00:03:55.546 malloc 4194304 00:03:55.546 register 0x200000800000 6291456 00:03:55.546 buf 0x200000a00000 len 4194304 PASSED 00:03:55.546 free 0x200000500000 3145728 00:03:55.546 free 0x2000004fff40 64 00:03:55.546 unregister 0x200000400000 4194304 PASSED 00:03:55.546 free 0x200000a00000 4194304 00:03:55.546 unregister 0x200000800000 6291456 PASSED 00:03:55.546 malloc 8388608 00:03:55.546 register 0x200000400000 10485760 00:03:55.546 buf 0x200000600000 len 8388608 PASSED 00:03:55.546 free 0x200000600000 8388608 00:03:55.546 unregister 0x200000400000 10485760 PASSED 00:03:55.546 passed 00:03:55.546 00:03:55.546 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.546 suites 1 1 n/a 0 0 00:03:55.546 tests 1 1 1 0 0 00:03:55.546 asserts 15 15 15 0 n/a 00:03:55.546 00:03:55.546 Elapsed time = 0.010 seconds 00:03:55.546 00:03:55.546 real 0m0.070s 00:03:55.546 user 0m0.023s 00:03:55.546 sys 0m0.047s 00:03:55.546 19:41:56 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:55.546 19:41:56 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:55.546 ************************************ 00:03:55.546 END TEST env_mem_callbacks 00:03:55.546 ************************************ 00:03:55.546 00:03:55.546 real 0m7.543s 00:03:55.546 user 0m1.058s 00:03:55.546 sys 0m1.047s 00:03:55.546 19:41:56 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:55.546 19:41:56 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.546 ************************************ 00:03:55.546 END TEST env 00:03:55.546 ************************************ 00:03:55.546 19:41:56 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:55.546 19:41:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:55.546 19:41:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:55.546 19:41:56 -- common/autotest_common.sh@10 -- # set +x 00:03:55.546 ************************************ 00:03:55.546 START TEST rpc 00:03:55.546 ************************************ 00:03:55.546 19:41:56 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:55.807 * Looking for test storage... 
00:03:55.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:55.807 19:41:56 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:55.807 19:41:56 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:55.807 19:41:56 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:55.807 19:41:56 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:55.807 19:41:56 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:55.807 19:41:56 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:55.807 19:41:56 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:55.807 19:41:56 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:55.807 19:41:56 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:55.807 19:41:56 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:55.807 19:41:56 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:55.807 19:41:56 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:55.807 19:41:56 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:55.807 19:41:56 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:55.807 19:41:56 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:55.807 19:41:56 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:55.807 19:41:56 rpc -- scripts/common.sh@345 -- # : 1 00:03:55.807 19:41:56 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:55.807 19:41:56 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:55.807 19:41:56 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:55.807 19:41:56 rpc -- scripts/common.sh@353 -- # local d=1 00:03:55.807 19:41:56 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:55.807 19:41:56 rpc -- scripts/common.sh@355 -- # echo 1 00:03:55.807 19:41:56 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:55.807 19:41:56 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:55.807 19:41:56 rpc -- scripts/common.sh@353 -- # local d=2 00:03:55.807 19:41:56 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:55.807 19:41:56 rpc -- scripts/common.sh@355 -- # echo 2 00:03:55.807 19:41:56 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:55.807 19:41:56 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:55.807 19:41:56 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:55.807 19:41:56 rpc -- scripts/common.sh@368 -- # return 0 00:03:55.807 19:41:56 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:55.807 19:41:56 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:55.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.807 --rc genhtml_branch_coverage=1 00:03:55.807 --rc genhtml_function_coverage=1 00:03:55.807 --rc genhtml_legend=1 00:03:55.807 --rc geninfo_all_blocks=1 00:03:55.807 --rc geninfo_unexecuted_blocks=1 00:03:55.807 00:03:55.807 ' 00:03:55.807 19:41:56 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:55.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.807 --rc genhtml_branch_coverage=1 00:03:55.807 --rc genhtml_function_coverage=1 00:03:55.807 --rc genhtml_legend=1 00:03:55.807 --rc geninfo_all_blocks=1 00:03:55.807 --rc geninfo_unexecuted_blocks=1 00:03:55.807 00:03:55.807 ' 00:03:55.807 19:41:56 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:55.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.807 --rc genhtml_branch_coverage=1 00:03:55.807 --rc genhtml_function_coverage=1 
00:03:55.807 --rc genhtml_legend=1 00:03:55.807 --rc geninfo_all_blocks=1 00:03:55.807 --rc geninfo_unexecuted_blocks=1 00:03:55.807 00:03:55.807 ' 00:03:55.807 19:41:56 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:55.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.807 --rc genhtml_branch_coverage=1 00:03:55.807 --rc genhtml_function_coverage=1 00:03:55.807 --rc genhtml_legend=1 00:03:55.807 --rc geninfo_all_blocks=1 00:03:55.807 --rc geninfo_unexecuted_blocks=1 00:03:55.807 00:03:55.807 ' 00:03:55.807 19:41:56 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3382923 00:03:55.807 19:41:56 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:55.807 19:41:56 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3382923 00:03:55.807 19:41:56 rpc -- common/autotest_common.sh@835 -- # '[' -z 3382923 ']' 00:03:55.807 19:41:56 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:55.807 19:41:56 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:55.807 19:41:56 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:55.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:55.807 19:41:56 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:55.807 19:41:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.807 19:41:56 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:55.807 [2024-11-26 19:41:56.536435] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:03:55.807 [2024-11-26 19:41:56.536511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3382923 ] 00:03:56.068 [2024-11-26 19:41:56.630798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.068 [2024-11-26 19:41:56.683409] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:56.068 [2024-11-26 19:41:56.683458] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3382923' to capture a snapshot of events at runtime. 00:03:56.068 [2024-11-26 19:41:56.683466] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:56.068 [2024-11-26 19:41:56.683473] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:56.068 [2024-11-26 19:41:56.683480] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3382923 for offline analysis/debug. 
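
Once the reactor starts, the rpc_cmd traces that follow are thin wrappers around scripts/rpc.py talking JSON-RPC to /var/tmp/spdk.sock. Stripped of the harness, the rpc_integrity flow below reduces to roughly this sequence (workspace path copied from the traces):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py bdev_malloc_create 8 512                     # returns Malloc0
  $SPDK/scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  $SPDK/scripts/rpc.py bdev_get_bdevs | jq length                   # 2 bdevs
  $SPDK/scripts/rpc.py bdev_passthru_delete Passthru0
  $SPDK/scripts/rpc.py bdev_malloc_delete Malloc0
  $SPDK/scripts/rpc.py bdev_get_bdevs | jq length                   # back to 0
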
00:03:56.068 [2024-11-26 19:41:56.684243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.638 19:41:57 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:56.638 19:41:57 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:56.638 19:41:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:56.638 19:41:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:56.638 19:41:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:56.638 19:41:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:56.638 19:41:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.638 19:41:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.638 19:41:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.638 ************************************ 00:03:56.638 START TEST rpc_integrity 00:03:56.638 ************************************ 00:03:56.638 19:41:57 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:56.638 19:41:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:56.638 19:41:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.638 19:41:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.638 19:41:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.638 19:41:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:56.638 19:41:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:56.638 19:41:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:56.638 19:41:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:56.638 19:41:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.638 19:41:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.638 19:41:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.638 19:41:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:56.638 19:41:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:56.638 19:41:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.638 19:41:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.907 19:41:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.907 19:41:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:56.907 { 00:03:56.907 "name": "Malloc0", 00:03:56.907 "aliases": [ 00:03:56.907 "943a02cf-a9b2-47c0-978e-41a96defb497" 00:03:56.907 ], 00:03:56.907 "product_name": "Malloc disk", 00:03:56.907 "block_size": 512, 00:03:56.907 "num_blocks": 16384, 00:03:56.907 "uuid": "943a02cf-a9b2-47c0-978e-41a96defb497", 00:03:56.907 "assigned_rate_limits": { 00:03:56.907 "rw_ios_per_sec": 0, 00:03:56.907 "rw_mbytes_per_sec": 0, 00:03:56.907 "r_mbytes_per_sec": 0, 00:03:56.907 "w_mbytes_per_sec": 0 00:03:56.907 }, 
00:03:56.907 "claimed": false, 00:03:56.907 "zoned": false, 00:03:56.907 "supported_io_types": { 00:03:56.907 "read": true, 00:03:56.907 "write": true, 00:03:56.907 "unmap": true, 00:03:56.907 "flush": true, 00:03:56.907 "reset": true, 00:03:56.907 "nvme_admin": false, 00:03:56.907 "nvme_io": false, 00:03:56.907 "nvme_io_md": false, 00:03:56.907 "write_zeroes": true, 00:03:56.907 "zcopy": true, 00:03:56.907 "get_zone_info": false, 00:03:56.907 "zone_management": false, 00:03:56.907 "zone_append": false, 00:03:56.907 "compare": false, 00:03:56.907 "compare_and_write": false, 00:03:56.907 "abort": true, 00:03:56.907 "seek_hole": false, 00:03:56.907 "seek_data": false, 00:03:56.907 "copy": true, 00:03:56.907 "nvme_iov_md": false 00:03:56.907 }, 00:03:56.907 "memory_domains": [ 00:03:56.907 { 00:03:56.907 "dma_device_id": "system", 00:03:56.907 "dma_device_type": 1 00:03:56.907 }, 00:03:56.907 { 00:03:56.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:56.907 "dma_device_type": 2 00:03:56.907 } 00:03:56.907 ], 00:03:56.907 "driver_specific": {} 00:03:56.907 } 00:03:56.907 ]' 00:03:56.907 19:41:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:56.907 19:41:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:56.907 19:41:57 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:56.907 19:41:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.907 19:41:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.907 [2024-11-26 19:41:57.508164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:56.907 [2024-11-26 19:41:57.508209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:56.907 [2024-11-26 19:41:57.508226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf88800 00:03:56.907 [2024-11-26 19:41:57.508234] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:56.907 [2024-11-26 19:41:57.509811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:56.907 [2024-11-26 19:41:57.509847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:56.907 Passthru0 00:03:56.907 19:41:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.907 19:41:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:56.907 19:41:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.907 19:41:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.907 19:41:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.907 19:41:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:56.907 { 00:03:56.907 "name": "Malloc0", 00:03:56.907 "aliases": [ 00:03:56.907 "943a02cf-a9b2-47c0-978e-41a96defb497" 00:03:56.907 ], 00:03:56.907 "product_name": "Malloc disk", 00:03:56.907 "block_size": 512, 00:03:56.907 "num_blocks": 16384, 00:03:56.907 "uuid": "943a02cf-a9b2-47c0-978e-41a96defb497", 00:03:56.907 "assigned_rate_limits": { 00:03:56.907 "rw_ios_per_sec": 0, 00:03:56.907 "rw_mbytes_per_sec": 0, 00:03:56.907 "r_mbytes_per_sec": 0, 00:03:56.907 "w_mbytes_per_sec": 0 00:03:56.907 }, 00:03:56.907 "claimed": true, 00:03:56.907 "claim_type": "exclusive_write", 00:03:56.907 "zoned": false, 00:03:56.907 "supported_io_types": { 00:03:56.907 "read": true, 00:03:56.907 "write": true, 00:03:56.907 "unmap": true, 00:03:56.907 "flush": 
true, 00:03:56.907 "reset": true, 00:03:56.907 "nvme_admin": false, 00:03:56.907 "nvme_io": false, 00:03:56.907 "nvme_io_md": false, 00:03:56.907 "write_zeroes": true, 00:03:56.907 "zcopy": true, 00:03:56.907 "get_zone_info": false, 00:03:56.907 "zone_management": false, 00:03:56.907 "zone_append": false, 00:03:56.907 "compare": false, 00:03:56.907 "compare_and_write": false, 00:03:56.907 "abort": true, 00:03:56.907 "seek_hole": false, 00:03:56.908 "seek_data": false, 00:03:56.908 "copy": true, 00:03:56.908 "nvme_iov_md": false 00:03:56.908 }, 00:03:56.908 "memory_domains": [ 00:03:56.908 { 00:03:56.908 "dma_device_id": "system", 00:03:56.908 "dma_device_type": 1 00:03:56.908 }, 00:03:56.908 { 00:03:56.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:56.908 "dma_device_type": 2 00:03:56.908 } 00:03:56.908 ], 00:03:56.908 "driver_specific": {} 00:03:56.908 }, 00:03:56.908 { 00:03:56.908 "name": "Passthru0", 00:03:56.908 "aliases": [ 00:03:56.908 "3fa91d6e-3f6b-5d2a-ac09-cdb29735550d" 00:03:56.908 ], 00:03:56.908 "product_name": "passthru", 00:03:56.908 "block_size": 512, 00:03:56.908 "num_blocks": 16384, 00:03:56.908 "uuid": "3fa91d6e-3f6b-5d2a-ac09-cdb29735550d", 00:03:56.908 "assigned_rate_limits": { 00:03:56.908 "rw_ios_per_sec": 0, 00:03:56.908 "rw_mbytes_per_sec": 0, 00:03:56.908 "r_mbytes_per_sec": 0, 00:03:56.908 "w_mbytes_per_sec": 0 00:03:56.908 }, 00:03:56.908 "claimed": false, 00:03:56.908 "zoned": false, 00:03:56.908 "supported_io_types": { 00:03:56.908 "read": true, 00:03:56.908 "write": true, 00:03:56.908 "unmap": true, 00:03:56.908 "flush": true, 00:03:56.908 "reset": true, 00:03:56.908 "nvme_admin": false, 00:03:56.908 "nvme_io": false, 00:03:56.908 "nvme_io_md": false, 00:03:56.908 "write_zeroes": true, 00:03:56.908 "zcopy": true, 00:03:56.908 "get_zone_info": false, 00:03:56.908 "zone_management": false, 00:03:56.908 "zone_append": false, 00:03:56.908 "compare": false, 00:03:56.908 "compare_and_write": false, 00:03:56.908 "abort": true, 00:03:56.908 "seek_hole": false, 00:03:56.908 "seek_data": false, 00:03:56.908 "copy": true, 00:03:56.908 "nvme_iov_md": false 00:03:56.908 }, 00:03:56.908 "memory_domains": [ 00:03:56.908 { 00:03:56.908 "dma_device_id": "system", 00:03:56.908 "dma_device_type": 1 00:03:56.908 }, 00:03:56.908 { 00:03:56.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:56.908 "dma_device_type": 2 00:03:56.908 } 00:03:56.908 ], 00:03:56.908 "driver_specific": { 00:03:56.908 "passthru": { 00:03:56.908 "name": "Passthru0", 00:03:56.908 "base_bdev_name": "Malloc0" 00:03:56.908 } 00:03:56.908 } 00:03:56.908 } 00:03:56.908 ]' 00:03:56.908 19:41:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:56.908 19:41:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:56.908 19:41:57 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:56.908 19:41:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.908 19:41:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.908 19:41:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.908 19:41:57 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:56.908 19:41:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.908 19:41:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.908 19:41:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.908 19:41:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:03:56.908 19:41:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.908 19:41:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.908 19:41:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.908 19:41:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:56.908 19:41:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:56.908 19:41:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:56.908 00:03:56.908 real 0m0.305s 00:03:56.908 user 0m0.181s 00:03:56.908 sys 0m0.057s 00:03:56.908 19:41:57 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:56.908 19:41:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.908 ************************************ 00:03:56.908 END TEST rpc_integrity 00:03:56.908 ************************************ 00:03:56.908 19:41:57 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:56.908 19:41:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.908 19:41:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.908 19:41:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.168 ************************************ 00:03:57.168 START TEST rpc_plugins 00:03:57.168 ************************************ 00:03:57.168 19:41:57 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:57.168 19:41:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:57.168 19:41:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.168 19:41:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.168 19:41:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.168 19:41:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:57.168 19:41:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:57.168 19:41:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.168 19:41:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.168 19:41:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.168 19:41:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:57.168 { 00:03:57.168 "name": "Malloc1", 00:03:57.168 "aliases": [ 00:03:57.168 "dc02126e-b1f1-43d5-bbb2-3e2ed4941326" 00:03:57.168 ], 00:03:57.168 "product_name": "Malloc disk", 00:03:57.168 "block_size": 4096, 00:03:57.168 "num_blocks": 256, 00:03:57.168 "uuid": "dc02126e-b1f1-43d5-bbb2-3e2ed4941326", 00:03:57.168 "assigned_rate_limits": { 00:03:57.168 "rw_ios_per_sec": 0, 00:03:57.168 "rw_mbytes_per_sec": 0, 00:03:57.168 "r_mbytes_per_sec": 0, 00:03:57.168 "w_mbytes_per_sec": 0 00:03:57.168 }, 00:03:57.168 "claimed": false, 00:03:57.168 "zoned": false, 00:03:57.168 "supported_io_types": { 00:03:57.168 "read": true, 00:03:57.168 "write": true, 00:03:57.168 "unmap": true, 00:03:57.168 "flush": true, 00:03:57.168 "reset": true, 00:03:57.168 "nvme_admin": false, 00:03:57.168 "nvme_io": false, 00:03:57.168 "nvme_io_md": false, 00:03:57.168 "write_zeroes": true, 00:03:57.168 "zcopy": true, 00:03:57.168 "get_zone_info": false, 00:03:57.168 "zone_management": false, 00:03:57.168 "zone_append": false, 00:03:57.168 "compare": false, 00:03:57.168 "compare_and_write": false, 00:03:57.168 "abort": true, 00:03:57.168 "seek_hole": false, 00:03:57.168 "seek_data": false, 00:03:57.168 "copy": true, 00:03:57.168 "nvme_iov_md": false 
00:03:57.168 }, 00:03:57.168 "memory_domains": [ 00:03:57.168 { 00:03:57.168 "dma_device_id": "system", 00:03:57.168 "dma_device_type": 1 00:03:57.168 }, 00:03:57.168 { 00:03:57.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.168 "dma_device_type": 2 00:03:57.168 } 00:03:57.168 ], 00:03:57.168 "driver_specific": {} 00:03:57.168 } 00:03:57.168 ]' 00:03:57.168 19:41:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:57.168 19:41:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:57.168 19:41:57 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:57.168 19:41:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.169 19:41:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.169 19:41:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.169 19:41:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:57.169 19:41:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.169 19:41:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.169 19:41:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.169 19:41:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:57.169 19:41:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:57.169 19:41:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:57.169 00:03:57.169 real 0m0.158s 00:03:57.169 user 0m0.097s 00:03:57.169 sys 0m0.022s 00:03:57.169 19:41:57 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.169 19:41:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.169 ************************************ 00:03:57.169 END TEST rpc_plugins 00:03:57.169 ************************************ 00:03:57.169 19:41:57 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:57.169 19:41:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.169 19:41:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.169 19:41:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.430 ************************************ 00:03:57.430 START TEST rpc_trace_cmd_test 00:03:57.430 ************************************ 00:03:57.430 19:41:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:57.430 19:41:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:57.430 19:41:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:57.430 19:41:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.430 19:41:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:57.430 19:41:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.430 19:41:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:57.430 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3382923", 00:03:57.430 "tpoint_group_mask": "0x8", 00:03:57.430 "iscsi_conn": { 00:03:57.431 "mask": "0x2", 00:03:57.431 "tpoint_mask": "0x0" 00:03:57.431 }, 00:03:57.431 "scsi": { 00:03:57.431 "mask": "0x4", 00:03:57.431 "tpoint_mask": "0x0" 00:03:57.431 }, 00:03:57.431 "bdev": { 00:03:57.431 "mask": "0x8", 00:03:57.431 "tpoint_mask": "0xffffffffffffffff" 00:03:57.431 }, 00:03:57.431 "nvmf_rdma": { 00:03:57.431 "mask": "0x10", 00:03:57.431 "tpoint_mask": "0x0" 00:03:57.431 }, 00:03:57.431 "nvmf_tcp": { 00:03:57.431 "mask": "0x20", 00:03:57.431 
"tpoint_mask": "0x0" 00:03:57.431 }, 00:03:57.431 "ftl": { 00:03:57.431 "mask": "0x40", 00:03:57.431 "tpoint_mask": "0x0" 00:03:57.431 }, 00:03:57.431 "blobfs": { 00:03:57.431 "mask": "0x80", 00:03:57.431 "tpoint_mask": "0x0" 00:03:57.431 }, 00:03:57.431 "dsa": { 00:03:57.431 "mask": "0x200", 00:03:57.431 "tpoint_mask": "0x0" 00:03:57.431 }, 00:03:57.431 "thread": { 00:03:57.431 "mask": "0x400", 00:03:57.431 "tpoint_mask": "0x0" 00:03:57.431 }, 00:03:57.431 "nvme_pcie": { 00:03:57.431 "mask": "0x800", 00:03:57.431 "tpoint_mask": "0x0" 00:03:57.431 }, 00:03:57.431 "iaa": { 00:03:57.431 "mask": "0x1000", 00:03:57.431 "tpoint_mask": "0x0" 00:03:57.431 }, 00:03:57.431 "nvme_tcp": { 00:03:57.431 "mask": "0x2000", 00:03:57.431 "tpoint_mask": "0x0" 00:03:57.431 }, 00:03:57.431 "bdev_nvme": { 00:03:57.431 "mask": "0x4000", 00:03:57.431 "tpoint_mask": "0x0" 00:03:57.431 }, 00:03:57.431 "sock": { 00:03:57.431 "mask": "0x8000", 00:03:57.431 "tpoint_mask": "0x0" 00:03:57.431 }, 00:03:57.431 "blob": { 00:03:57.431 "mask": "0x10000", 00:03:57.431 "tpoint_mask": "0x0" 00:03:57.431 }, 00:03:57.431 "bdev_raid": { 00:03:57.431 "mask": "0x20000", 00:03:57.431 "tpoint_mask": "0x0" 00:03:57.431 }, 00:03:57.431 "scheduler": { 00:03:57.431 "mask": "0x40000", 00:03:57.431 "tpoint_mask": "0x0" 00:03:57.431 } 00:03:57.431 }' 00:03:57.431 19:41:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:57.431 19:41:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:57.431 19:41:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:57.431 19:41:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:57.431 19:41:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:57.431 19:41:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:57.431 19:41:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:57.431 19:41:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:57.431 19:41:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:57.431 19:41:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:57.431 00:03:57.431 real 0m0.216s 00:03:57.431 user 0m0.181s 00:03:57.431 sys 0m0.025s 00:03:57.431 19:41:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.431 19:41:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:57.431 ************************************ 00:03:57.431 END TEST rpc_trace_cmd_test 00:03:57.431 ************************************ 00:03:57.431 19:41:58 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:57.431 19:41:58 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:57.431 19:41:58 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:57.431 19:41:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.431 19:41:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.431 19:41:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.692 ************************************ 00:03:57.692 START TEST rpc_daemon_integrity 00:03:57.692 ************************************ 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.692 19:41:58 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:57.692 { 00:03:57.692 "name": "Malloc2", 00:03:57.692 "aliases": [ 00:03:57.692 "929fcedc-59c0-4772-a324-5818679a4071" 00:03:57.692 ], 00:03:57.692 "product_name": "Malloc disk", 00:03:57.692 "block_size": 512, 00:03:57.692 "num_blocks": 16384, 00:03:57.692 "uuid": "929fcedc-59c0-4772-a324-5818679a4071", 00:03:57.692 "assigned_rate_limits": { 00:03:57.692 "rw_ios_per_sec": 0, 00:03:57.692 "rw_mbytes_per_sec": 0, 00:03:57.692 "r_mbytes_per_sec": 0, 00:03:57.692 "w_mbytes_per_sec": 0 00:03:57.692 }, 00:03:57.692 "claimed": false, 00:03:57.692 "zoned": false, 00:03:57.692 "supported_io_types": { 00:03:57.692 "read": true, 00:03:57.692 "write": true, 00:03:57.692 "unmap": true, 00:03:57.692 "flush": true, 00:03:57.692 "reset": true, 00:03:57.692 "nvme_admin": false, 00:03:57.692 "nvme_io": false, 00:03:57.692 "nvme_io_md": false, 00:03:57.692 "write_zeroes": true, 00:03:57.692 "zcopy": true, 00:03:57.692 "get_zone_info": false, 00:03:57.692 "zone_management": false, 00:03:57.692 "zone_append": false, 00:03:57.692 "compare": false, 00:03:57.692 "compare_and_write": false, 00:03:57.692 "abort": true, 00:03:57.692 "seek_hole": false, 00:03:57.692 "seek_data": false, 00:03:57.692 "copy": true, 00:03:57.692 "nvme_iov_md": false 00:03:57.692 }, 00:03:57.692 "memory_domains": [ 00:03:57.692 { 00:03:57.692 "dma_device_id": "system", 00:03:57.692 "dma_device_type": 1 00:03:57.692 }, 00:03:57.692 { 00:03:57.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.692 "dma_device_type": 2 00:03:57.692 } 00:03:57.692 ], 00:03:57.692 "driver_specific": {} 00:03:57.692 } 00:03:57.692 ]' 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.692 [2024-11-26 19:41:58.430654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:57.692 
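
The daemon-integrity pass rebuilds the same malloc-plus-passthru stack on Malloc2, and the large JSON blocks around it are plain bdev_get_bdevs dumps. When a single bdev is of interest, the dump can be narrowed with rpc.py's -b name filter instead of read whole; a sketch against the Passthru0 assembled here:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py bdev_get_bdevs -b Passthru0 \
    | jq '.[0].driver_specific.passthru.base_bdev_name'             # "Malloc2"
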
[2024-11-26 19:41:58.430697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:57.692 [2024-11-26 19:41:58.430712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe44fe0 00:03:57.692 [2024-11-26 19:41:58.430720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:57.692 [2024-11-26 19:41:58.432226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:57.692 [2024-11-26 19:41:58.432261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:57.692 Passthru0 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:57.692 { 00:03:57.692 "name": "Malloc2", 00:03:57.692 "aliases": [ 00:03:57.692 "929fcedc-59c0-4772-a324-5818679a4071" 00:03:57.692 ], 00:03:57.692 "product_name": "Malloc disk", 00:03:57.692 "block_size": 512, 00:03:57.692 "num_blocks": 16384, 00:03:57.692 "uuid": "929fcedc-59c0-4772-a324-5818679a4071", 00:03:57.692 "assigned_rate_limits": { 00:03:57.692 "rw_ios_per_sec": 0, 00:03:57.692 "rw_mbytes_per_sec": 0, 00:03:57.692 "r_mbytes_per_sec": 0, 00:03:57.692 "w_mbytes_per_sec": 0 00:03:57.692 }, 00:03:57.692 "claimed": true, 00:03:57.692 "claim_type": "exclusive_write", 00:03:57.692 "zoned": false, 00:03:57.692 "supported_io_types": { 00:03:57.692 "read": true, 00:03:57.692 "write": true, 00:03:57.692 "unmap": true, 00:03:57.692 "flush": true, 00:03:57.692 "reset": true, 00:03:57.692 "nvme_admin": false, 00:03:57.692 "nvme_io": false, 00:03:57.692 "nvme_io_md": false, 00:03:57.692 "write_zeroes": true, 00:03:57.692 "zcopy": true, 00:03:57.692 "get_zone_info": false, 00:03:57.692 "zone_management": false, 00:03:57.692 "zone_append": false, 00:03:57.692 "compare": false, 00:03:57.692 "compare_and_write": false, 00:03:57.692 "abort": true, 00:03:57.692 "seek_hole": false, 00:03:57.692 "seek_data": false, 00:03:57.692 "copy": true, 00:03:57.692 "nvme_iov_md": false 00:03:57.692 }, 00:03:57.692 "memory_domains": [ 00:03:57.692 { 00:03:57.692 "dma_device_id": "system", 00:03:57.692 "dma_device_type": 1 00:03:57.692 }, 00:03:57.692 { 00:03:57.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.692 "dma_device_type": 2 00:03:57.692 } 00:03:57.692 ], 00:03:57.692 "driver_specific": {} 00:03:57.692 }, 00:03:57.692 { 00:03:57.692 "name": "Passthru0", 00:03:57.692 "aliases": [ 00:03:57.692 "f9e408b4-4abe-5ba4-af4a-c11a56c8009a" 00:03:57.692 ], 00:03:57.692 "product_name": "passthru", 00:03:57.692 "block_size": 512, 00:03:57.692 "num_blocks": 16384, 00:03:57.692 "uuid": "f9e408b4-4abe-5ba4-af4a-c11a56c8009a", 00:03:57.692 "assigned_rate_limits": { 00:03:57.692 "rw_ios_per_sec": 0, 00:03:57.692 "rw_mbytes_per_sec": 0, 00:03:57.692 "r_mbytes_per_sec": 0, 00:03:57.692 "w_mbytes_per_sec": 0 00:03:57.692 }, 00:03:57.692 "claimed": false, 00:03:57.692 "zoned": false, 00:03:57.692 "supported_io_types": { 00:03:57.692 "read": true, 00:03:57.692 "write": true, 00:03:57.692 "unmap": true, 00:03:57.692 "flush": true, 00:03:57.692 "reset": true, 
00:03:57.692 "nvme_admin": false, 00:03:57.692 "nvme_io": false, 00:03:57.692 "nvme_io_md": false, 00:03:57.692 "write_zeroes": true, 00:03:57.692 "zcopy": true, 00:03:57.692 "get_zone_info": false, 00:03:57.692 "zone_management": false, 00:03:57.692 "zone_append": false, 00:03:57.692 "compare": false, 00:03:57.692 "compare_and_write": false, 00:03:57.692 "abort": true, 00:03:57.692 "seek_hole": false, 00:03:57.692 "seek_data": false, 00:03:57.692 "copy": true, 00:03:57.692 "nvme_iov_md": false 00:03:57.692 }, 00:03:57.692 "memory_domains": [ 00:03:57.692 { 00:03:57.692 "dma_device_id": "system", 00:03:57.692 "dma_device_type": 1 00:03:57.692 }, 00:03:57.692 { 00:03:57.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.692 "dma_device_type": 2 00:03:57.692 } 00:03:57.692 ], 00:03:57.692 "driver_specific": { 00:03:57.692 "passthru": { 00:03:57.692 "name": "Passthru0", 00:03:57.692 "base_bdev_name": "Malloc2" 00:03:57.692 } 00:03:57.692 } 00:03:57.692 } 00:03:57.692 ]' 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.692 19:41:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.953 19:41:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.953 19:41:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:57.953 19:41:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.953 19:41:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.953 19:41:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.953 19:41:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:57.953 19:41:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.953 19:41:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.953 19:41:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.953 19:41:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:57.953 19:41:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:57.953 19:41:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:57.953 00:03:57.953 real 0m0.305s 00:03:57.953 user 0m0.197s 00:03:57.953 sys 0m0.038s 00:03:57.953 19:41:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.953 19:41:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.953 ************************************ 00:03:57.953 END TEST rpc_daemon_integrity 00:03:57.953 ************************************ 00:03:57.953 19:41:58 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:57.953 19:41:58 rpc -- rpc/rpc.sh@84 -- # killprocess 3382923 00:03:57.953 19:41:58 rpc -- common/autotest_common.sh@954 -- # '[' -z 3382923 ']' 00:03:57.953 19:41:58 rpc -- common/autotest_common.sh@958 -- # kill -0 3382923 00:03:57.953 19:41:58 rpc -- common/autotest_common.sh@959 -- # uname 00:03:57.953 19:41:58 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:57.953 19:41:58 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3382923 
00:03:57.953 19:41:58 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:57.953 19:41:58 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:57.953 19:41:58 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3382923' 00:03:57.953 killing process with pid 3382923 00:03:57.953 19:41:58 rpc -- common/autotest_common.sh@973 -- # kill 3382923 00:03:57.953 19:41:58 rpc -- common/autotest_common.sh@978 -- # wait 3382923 00:03:58.214 00:03:58.214 real 0m2.669s 00:03:58.214 user 0m3.382s 00:03:58.214 sys 0m0.822s 00:03:58.214 19:41:58 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.214 19:41:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.214 ************************************ 00:03:58.214 END TEST rpc 00:03:58.214 ************************************ 00:03:58.214 19:41:58 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:58.215 19:41:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.215 19:41:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.215 19:41:58 -- common/autotest_common.sh@10 -- # set +x 00:03:58.215 ************************************ 00:03:58.215 START TEST skip_rpc 00:03:58.215 ************************************ 00:03:58.215 19:41:59 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:58.475 * Looking for test storage... 00:03:58.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:58.475 19:41:59 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:58.475 19:41:59 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:58.475 19:41:59 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:58.475 19:41:59 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:58.475 19:41:59 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:58.475 19:41:59 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:58.475 19:41:59 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:58.475 19:41:59 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:58.475 19:41:59 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:58.475 19:41:59 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:58.475 19:41:59 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:58.475 19:41:59 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:58.475 19:41:59 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:58.475 19:41:59 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:58.475 19:41:59 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:58.475 19:41:59 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:58.475 19:41:59 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:58.475 19:41:59 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:58.475 19:41:59 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:58.475 19:41:59 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:58.475 19:41:59 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:58.475 19:41:59 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:58.475 19:41:59 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:58.475 19:41:59 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:58.475 19:41:59 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:58.475 19:41:59 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:58.475 19:41:59 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:58.475 19:41:59 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:58.475 19:41:59 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:58.476 19:41:59 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:58.476 19:41:59 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:58.476 19:41:59 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:58.476 19:41:59 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:58.476 19:41:59 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:58.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.476 --rc genhtml_branch_coverage=1 00:03:58.476 --rc genhtml_function_coverage=1 00:03:58.476 --rc genhtml_legend=1 00:03:58.476 --rc geninfo_all_blocks=1 00:03:58.476 --rc geninfo_unexecuted_blocks=1 00:03:58.476 00:03:58.476 ' 00:03:58.476 19:41:59 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:58.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.476 --rc genhtml_branch_coverage=1 00:03:58.476 --rc genhtml_function_coverage=1 00:03:58.476 --rc genhtml_legend=1 00:03:58.476 --rc geninfo_all_blocks=1 00:03:58.476 --rc geninfo_unexecuted_blocks=1 00:03:58.476 00:03:58.476 ' 00:03:58.476 19:41:59 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:58.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.476 --rc genhtml_branch_coverage=1 00:03:58.476 --rc genhtml_function_coverage=1 00:03:58.476 --rc genhtml_legend=1 00:03:58.476 --rc geninfo_all_blocks=1 00:03:58.476 --rc geninfo_unexecuted_blocks=1 00:03:58.476 00:03:58.476 ' 00:03:58.476 19:41:59 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:58.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.476 --rc genhtml_branch_coverage=1 00:03:58.476 --rc genhtml_function_coverage=1 00:03:58.476 --rc genhtml_legend=1 00:03:58.476 --rc geninfo_all_blocks=1 00:03:58.476 --rc geninfo_unexecuted_blocks=1 00:03:58.476 00:03:58.476 ' 00:03:58.476 19:41:59 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:58.476 19:41:59 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:58.476 19:41:59 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:58.476 19:41:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.476 19:41:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.476 19:41:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.476 ************************************ 00:03:58.476 START TEST skip_rpc 00:03:58.476 ************************************ 00:03:58.476 19:41:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:58.476 
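
test_skip_rpc, traced next, is the negative test: launched with --no-rpc-server, the target never opens /var/tmp/spdk.sock, so every RPC has to fail for the test to pass. Minus the harness plumbing, it is roughly:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  spdk_pid=$!
  sleep 5                                         # same settle delay the test uses
  if $SPDK/scripts/rpc.py spdk_get_version; then
      echo "FAIL: RPC server should not be listening" >&2
  fi
  kill $spdk_pid
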
19:41:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3383846 00:03:58.476 19:41:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:58.476 19:41:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:58.476 19:41:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:58.735 [2024-11-26 19:41:59.323360] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:03:58.735 [2024-11-26 19:41:59.323423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3383846 ] 00:03:58.735 [2024-11-26 19:41:59.398135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.735 [2024-11-26 19:41:59.451214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.018 19:42:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:04.018 19:42:04 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:04.018 19:42:04 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:04.018 19:42:04 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:04.018 19:42:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:04.018 19:42:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:04.018 19:42:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:04.018 19:42:04 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:04.018 19:42:04 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.018 19:42:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.018 19:42:04 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:04.018 19:42:04 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:04.018 19:42:04 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:04.018 19:42:04 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:04.018 19:42:04 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:04.018 19:42:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:04.018 19:42:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3383846 00:04:04.018 19:42:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3383846 ']' 00:04:04.018 19:42:04 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3383846 00:04:04.018 19:42:04 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:04.018 19:42:04 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:04.018 19:42:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3383846 00:04:04.018 19:42:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:04.018 19:42:04 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:04.018 19:42:04 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3383846' 00:04:04.018 killing process with pid 3383846 00:04:04.018 19:42:04 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3383846 00:04:04.018 19:42:04 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3383846 00:04:04.018 00:04:04.018 real 0m5.270s 00:04:04.018 user 0m5.036s 00:04:04.018 sys 0m0.274s 00:04:04.018 19:42:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.018 19:42:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.018 ************************************ 00:04:04.018 END TEST skip_rpc 00:04:04.018 ************************************ 00:04:04.019 19:42:04 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:04.019 19:42:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.019 19:42:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.019 19:42:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.019 ************************************ 00:04:04.019 START TEST skip_rpc_with_json 00:04:04.019 ************************************ 00:04:04.019 19:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:04.019 19:42:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:04.019 19:42:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3385177 00:04:04.019 19:42:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:04.019 19:42:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:04.019 19:42:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3385177 00:04:04.019 19:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3385177 ']' 00:04:04.019 19:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:04.019 19:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:04.019 19:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:04.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:04.019 19:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:04.019 19:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:04.019 [2024-11-26 19:42:04.663784] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
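
A note on the assertion that just ran: because spdk_tgt was started with --no-rpc-server, the test requires rpc_cmd to fail, and the harness inverts the exit status through its NOT wrapper. A minimal sketch of that pattern (a hypothetical simplification of the autotest_common.sh helper whose es bookkeeping is traced above):

    # NOT: succeed only when the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        # Exit codes above 128 mean death by signal; surface those as real
        # failures rather than the expected error (the `es > 128` test above).
        if (( es > 128 )); then
            return "$es"
        fi
        (( es != 0 ))
    }

    NOT false                       # status 0: the command failed, as required
    NOT true && echo "unreachable"  # status 1: the command unexpectedly passed
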
00:04:04.019 [2024-11-26 19:42:04.663831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3385177 ]
00:04:04.019 [2024-11-26 19:42:04.747825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:04.019 [2024-11-26 19:42:04.778682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:04.956 19:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:04.956 19:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:04:04.956 19:42:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:04:04.956 19:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:04.956 19:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:04.956 [2024-11-26 19:42:05.463208] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:04:04.956 request:
00:04:04.956 {
00:04:04.956 "trtype": "tcp",
00:04:04.956 "method": "nvmf_get_transports",
00:04:04.956 "req_id": 1
00:04:04.956 }
00:04:04.956 Got JSON-RPC error response
00:04:04.956 response:
00:04:04.956 {
00:04:04.956 "code": -19,
00:04:04.956 "message": "No such device"
00:04:04.956 }
00:04:04.956 19:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:04:04.956 19:42:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:04:04.956 19:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:04.956 19:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:04.956 [2024-11-26 19:42:05.475312] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:04.956 19:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:04.956 19:42:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:04:04.956 19:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:04.956 19:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:04.956 19:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:04.956 19:42:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:04.956 {
00:04:04.956 "subsystems": [
00:04:04.956 {
00:04:04.956 "subsystem": "fsdev",
00:04:04.956 "config": [
00:04:04.956 {
00:04:04.956 "method": "fsdev_set_opts",
00:04:04.956 "params": {
00:04:04.956 "fsdev_io_pool_size": 65535,
00:04:04.956 "fsdev_io_cache_size": 256
00:04:04.956 }
00:04:04.956 }
00:04:04.956 ]
00:04:04.956 },
00:04:04.956 {
00:04:04.956 "subsystem": "vfio_user_target",
00:04:04.956 "config": null
00:04:04.956 },
00:04:04.956 {
00:04:04.956 "subsystem": "keyring",
00:04:04.956 "config": []
00:04:04.956 },
00:04:04.956 {
00:04:04.956 "subsystem": "iobuf",
00:04:04.956 "config": [
00:04:04.956 {
00:04:04.956 "method": "iobuf_set_options",
00:04:04.956 "params": {
00:04:04.956 "small_pool_count": 8192,
00:04:04.956 "large_pool_count": 1024,
00:04:04.956 "small_bufsize": 8192,
00:04:04.956 "large_bufsize": 135168,
00:04:04.956 "enable_numa": false
00:04:04.956 }
00:04:04.956 }
00:04:04.956 ]
00:04:04.957 },
00:04:04.957 {
00:04:04.957 "subsystem": "sock",
00:04:04.957 "config": [
00:04:04.957 {
00:04:04.957 "method": "sock_set_default_impl",
00:04:04.957 "params": {
00:04:04.957 "impl_name": "posix"
00:04:04.957 }
00:04:04.957 },
00:04:04.957 {
00:04:04.957 "method": "sock_impl_set_options",
00:04:04.957 "params": {
00:04:04.957 "impl_name": "ssl",
00:04:04.957 "recv_buf_size": 4096,
00:04:04.957 "send_buf_size": 4096,
00:04:04.957 "enable_recv_pipe": true,
00:04:04.957 "enable_quickack": false,
00:04:04.957 "enable_placement_id": 0,
00:04:04.957 "enable_zerocopy_send_server": true,
00:04:04.957 "enable_zerocopy_send_client": false,
00:04:04.957 "zerocopy_threshold": 0,
00:04:04.957 "tls_version": 0,
00:04:04.957 "enable_ktls": false
00:04:04.957 }
00:04:04.957 },
00:04:04.957 {
00:04:04.957 "method": "sock_impl_set_options",
00:04:04.957 "params": {
00:04:04.957 "impl_name": "posix",
00:04:04.957 "recv_buf_size": 2097152,
00:04:04.957 "send_buf_size": 2097152,
00:04:04.957 "enable_recv_pipe": true,
00:04:04.957 "enable_quickack": false,
00:04:04.957 "enable_placement_id": 0,
00:04:04.957 "enable_zerocopy_send_server": true,
00:04:04.957 "enable_zerocopy_send_client": false,
00:04:04.957 "zerocopy_threshold": 0,
00:04:04.957 "tls_version": 0,
00:04:04.957 "enable_ktls": false
00:04:04.957 }
00:04:04.957 }
00:04:04.957 ]
00:04:04.957 },
00:04:04.957 {
00:04:04.957 "subsystem": "vmd",
00:04:04.957 "config": []
00:04:04.957 },
00:04:04.957 {
00:04:04.957 "subsystem": "accel",
00:04:04.957 "config": [
00:04:04.957 {
00:04:04.957 "method": "accel_set_options",
00:04:04.957 "params": {
00:04:04.957 "small_cache_size": 128,
00:04:04.957 "large_cache_size": 16,
00:04:04.957 "task_count": 2048,
00:04:04.957 "sequence_count": 2048,
00:04:04.957 "buf_count": 2048
00:04:04.957 }
00:04:04.957 }
00:04:04.957 ]
00:04:04.957 },
00:04:04.957 {
00:04:04.957 "subsystem": "bdev",
00:04:04.957 "config": [
00:04:04.957 {
00:04:04.957 "method": "bdev_set_options",
00:04:04.957 "params": {
00:04:04.957 "bdev_io_pool_size": 65535,
00:04:04.957 "bdev_io_cache_size": 256,
00:04:04.957 "bdev_auto_examine": true,
00:04:04.957 "iobuf_small_cache_size": 128,
00:04:04.957 "iobuf_large_cache_size": 16
00:04:04.957 }
00:04:04.957 },
00:04:04.957 {
00:04:04.957 "method": "bdev_raid_set_options",
00:04:04.957 "params": {
00:04:04.957 "process_window_size_kb": 1024,
00:04:04.957 "process_max_bandwidth_mb_sec": 0
00:04:04.957 }
00:04:04.957 },
00:04:04.957 {
00:04:04.957 "method": "bdev_iscsi_set_options",
00:04:04.957 "params": {
00:04:04.957 "timeout_sec": 30
00:04:04.957 }
00:04:04.957 },
00:04:04.957 {
00:04:04.957 "method": "bdev_nvme_set_options",
00:04:04.957 "params": {
00:04:04.957 "action_on_timeout": "none",
00:04:04.957 "timeout_us": 0,
00:04:04.957 "timeout_admin_us": 0,
00:04:04.957 "keep_alive_timeout_ms": 10000,
00:04:04.957 "arbitration_burst": 0,
00:04:04.957 "low_priority_weight": 0,
00:04:04.957 "medium_priority_weight": 0,
00:04:04.957 "high_priority_weight": 0,
00:04:04.957 "nvme_adminq_poll_period_us": 10000,
00:04:04.957 "nvme_ioq_poll_period_us": 0,
00:04:04.957 "io_queue_requests": 0,
00:04:04.957 "delay_cmd_submit": true,
00:04:04.957 "transport_retry_count": 4,
00:04:04.957 "bdev_retry_count": 3,
00:04:04.957 "transport_ack_timeout": 0,
00:04:04.957 "ctrlr_loss_timeout_sec": 0,
00:04:04.957 "reconnect_delay_sec": 0,
00:04:04.957 "fast_io_fail_timeout_sec": 0,
00:04:04.957 "disable_auto_failback": false,
00:04:04.957 "generate_uuids": false,
00:04:04.957 "transport_tos": 0,
00:04:04.957 "nvme_error_stat": false,
00:04:04.957 "rdma_srq_size": 0,
00:04:04.957 "io_path_stat": false,
00:04:04.957 "allow_accel_sequence": false,
00:04:04.957 "rdma_max_cq_size": 0,
00:04:04.957 "rdma_cm_event_timeout_ms": 0,
00:04:04.957 "dhchap_digests": [
00:04:04.957 "sha256",
00:04:04.957 "sha384",
00:04:04.957 "sha512"
00:04:04.957 ],
00:04:04.957 "dhchap_dhgroups": [
00:04:04.957 "null",
00:04:04.957 "ffdhe2048",
00:04:04.957 "ffdhe3072",
00:04:04.957 "ffdhe4096",
00:04:04.957 "ffdhe6144",
00:04:04.957 "ffdhe8192"
00:04:04.957 ]
00:04:04.957 }
00:04:04.957 },
00:04:04.957 {
00:04:04.957 "method": "bdev_nvme_set_hotplug",
00:04:04.957 "params": {
00:04:04.957 "period_us": 100000,
00:04:04.957 "enable": false
00:04:04.957 }
00:04:04.957 },
00:04:04.957 {
00:04:04.957 "method": "bdev_wait_for_examine"
00:04:04.957 }
00:04:04.957 ]
00:04:04.957 },
00:04:04.957 {
00:04:04.957 "subsystem": "scsi",
00:04:04.957 "config": null
00:04:04.957 },
00:04:04.957 {
00:04:04.957 "subsystem": "scheduler",
00:04:04.957 "config": [
00:04:04.957 {
00:04:04.957 "method": "framework_set_scheduler",
00:04:04.957 "params": {
00:04:04.957 "name": "static"
00:04:04.957 }
00:04:04.957 }
00:04:04.957 ]
00:04:04.957 },
00:04:04.957 {
00:04:04.957 "subsystem": "vhost_scsi",
00:04:04.957 "config": []
00:04:04.957 },
00:04:04.957 {
00:04:04.957 "subsystem": "vhost_blk",
00:04:04.957 "config": []
00:04:04.957 },
00:04:04.957 {
00:04:04.957 "subsystem": "ublk",
00:04:04.957 "config": []
00:04:04.957 },
00:04:04.957 {
00:04:04.957 "subsystem": "nbd",
00:04:04.957 "config": []
00:04:04.957 },
00:04:04.957 {
00:04:04.957 "subsystem": "nvmf",
00:04:04.957 "config": [
00:04:04.957 {
00:04:04.957 "method": "nvmf_set_config",
00:04:04.957 "params": {
00:04:04.957 "discovery_filter": "match_any",
00:04:04.957 "admin_cmd_passthru": {
00:04:04.957 "identify_ctrlr": false
00:04:04.957 },
00:04:04.957 "dhchap_digests": [
00:04:04.957 "sha256",
00:04:04.957 "sha384",
00:04:04.957 "sha512"
00:04:04.957 ],
00:04:04.957 "dhchap_dhgroups": [
00:04:04.957 "null",
00:04:04.957 "ffdhe2048",
00:04:04.957 "ffdhe3072",
00:04:04.957 "ffdhe4096",
00:04:04.957 "ffdhe6144",
00:04:04.957 "ffdhe8192"
00:04:04.957 ]
00:04:04.957 }
00:04:04.957 },
00:04:04.957 {
00:04:04.957 "method": "nvmf_set_max_subsystems",
00:04:04.957 "params": {
00:04:04.957 "max_subsystems": 1024
00:04:04.957 }
00:04:04.957 },
00:04:04.957 {
00:04:04.957 "method": "nvmf_set_crdt",
00:04:04.957 "params": {
00:04:04.957 "crdt1": 0,
00:04:04.957 "crdt2": 0,
00:04:04.957 "crdt3": 0
00:04:04.957 }
00:04:04.957 },
00:04:04.957 {
00:04:04.957 "method": "nvmf_create_transport",
00:04:04.957 "params": {
00:04:04.957 "trtype": "TCP",
00:04:04.957 "max_queue_depth": 128,
00:04:04.957 "max_io_qpairs_per_ctrlr": 127,
00:04:04.957 "in_capsule_data_size": 4096,
00:04:04.957 "max_io_size": 131072,
00:04:04.957 "io_unit_size": 131072,
00:04:04.957 "max_aq_depth": 128,
00:04:04.957 "num_shared_buffers": 511,
00:04:04.957 "buf_cache_size": 4294967295,
00:04:04.957 "dif_insert_or_strip": false,
00:04:04.957 "zcopy": false,
00:04:04.957 "c2h_success": true,
00:04:04.957 "sock_priority": 0,
00:04:04.957 "abort_timeout_sec": 1,
00:04:04.957 "ack_timeout": 0,
00:04:04.957 "data_wr_pool_size": 0
00:04:04.957 }
00:04:04.957 }
00:04:04.957 ]
00:04:04.957 },
00:04:04.957 {
00:04:04.957 "subsystem": "iscsi",
00:04:04.957 "config": [
00:04:04.957 {
00:04:04.957 "method": "iscsi_set_options",
00:04:04.957 "params": {
00:04:04.957 "node_base": "iqn.2016-06.io.spdk",
00:04:04.957 "max_sessions": 128,
00:04:04.957 "max_connections_per_session": 2,
00:04:04.957 "max_queue_depth": 64,
00:04:04.957 "default_time2wait": 2,
00:04:04.957 "default_time2retain": 20,
00:04:04.957 "first_burst_length": 8192,
00:04:04.957 "immediate_data": true,
00:04:04.957 "allow_duplicated_isid": false,
00:04:04.957 "error_recovery_level": 0,
00:04:04.957 "nop_timeout": 60,
00:04:04.957 "nop_in_interval": 30,
00:04:04.957 "disable_chap": false,
00:04:04.957 "require_chap": false,
00:04:04.957 "mutual_chap": false,
00:04:04.957 "chap_group": 0,
00:04:04.957 "max_large_datain_per_connection": 64,
00:04:04.957 "max_r2t_per_connection": 4,
00:04:04.957 "pdu_pool_size": 36864,
00:04:04.957 "immediate_data_pool_size": 16384,
00:04:04.957 "data_out_pool_size": 2048
00:04:04.957 }
00:04:04.957 }
00:04:04.957 ]
00:04:04.957 }
00:04:04.957 ]
00:04:04.957 }
00:04:04.957 19:42:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:04:04.957 19:42:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3385177
00:04:04.957 19:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3385177 ']'
00:04:04.957 19:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3385177
00:04:04.957 19:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:04:04.957 19:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:04.957 19:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3385177
00:04:04.957 19:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:04.957 19:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:04.957 19:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3385177'
00:04:04.957 killing process with pid 3385177
19:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3385177
19:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3385177
00:04:05.218 19:42:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3385576
19:42:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
19:42:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:10.496 19:42:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3385576
19:42:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3385576 ']'
19:42:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3385576
19:42:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
19:42:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
19:42:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3385576
19:42:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
19:42:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
19:42:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 --
# echo 'killing process with pid 3385576' 00:04:10.496 killing process with pid 3385576 00:04:10.496 19:42:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3385576 00:04:10.496 19:42:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3385576 00:04:10.496 19:42:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:10.496 19:42:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:10.496 00:04:10.496 real 0m6.566s 00:04:10.496 user 0m6.467s 00:04:10.496 sys 0m0.577s 00:04:10.496 19:42:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.496 19:42:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:10.496 ************************************ 00:04:10.496 END TEST skip_rpc_with_json 00:04:10.496 ************************************ 00:04:10.496 19:42:11 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:10.496 19:42:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.496 19:42:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.496 19:42:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.496 ************************************ 00:04:10.496 START TEST skip_rpc_with_delay 00:04:10.496 ************************************ 00:04:10.496 19:42:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:10.496 19:42:11 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:10.496 19:42:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:10.496 19:42:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:10.496 19:42:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.496 19:42:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:10.496 19:42:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.496 19:42:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:10.496 19:42:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.496 19:42:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:10.496 19:42:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.496 19:42:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:10.496 19:42:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:10.756 
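
Before the --wait-for-rpc probe above reports back, the shape of the skip_rpc_with_json round trip that just finished is worth condensing. A hedged sketch (paths as traced; the SPDK_DIR variable and the log.txt redirection are assumptions, since the harness wires up output capture outside this excerpt):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc_cmd save_config > "$SPDK_DIR/test/rpc/config.json"   # dump the live config
    killprocess "$spdk_pid"                                  # stop the first target
    "$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 \
        --json "$SPDK_DIR/test/rpc/config.json" > "$SPDK_DIR/test/rpc/log.txt" 2>&1 &
    spdk_pid=$!
    sleep 5                                                  # rpc/skip_rpc.sh@48
    killprocess "$spdk_pid"
    # The replayed config must have re-created the TCP transport:
    grep -q 'TCP Transport Init' "$SPDK_DIR/test/rpc/log.txt"
    rm "$SPDK_DIR/test/rpc/log.txt"
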
[2024-11-26 19:42:11.318997] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:10.756 19:42:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:10.756 19:42:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:10.756 19:42:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:10.756 19:42:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:10.756 00:04:10.756 real 0m0.080s 00:04:10.756 user 0m0.046s 00:04:10.756 sys 0m0.033s 00:04:10.756 19:42:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.756 19:42:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:10.756 ************************************ 00:04:10.756 END TEST skip_rpc_with_delay 00:04:10.756 ************************************ 00:04:10.756 19:42:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:10.756 19:42:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:10.756 19:42:11 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:10.756 19:42:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.756 19:42:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.756 19:42:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.756 ************************************ 00:04:10.756 START TEST exit_on_failed_rpc_init 00:04:10.756 ************************************ 00:04:10.756 19:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:10.756 19:42:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3387239 00:04:10.756 19:42:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3387239 00:04:10.756 19:42:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:10.756 19:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3387239 ']' 00:04:10.756 19:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.756 19:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:10.756 19:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:10.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:10.756 19:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:10.756 19:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:10.756 [2024-11-26 19:42:11.479460] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
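
The app.c ERROR near the top of this stretch is the expected outcome: NOT asserted that spdk_tgt refuses --wait-for-rpc when no RPC server will be started. Before running the binary, the harness vets it with valid_exec_arg; a hypothetical reduction of the type -t / type -P dance traced above:

    valid_exec_arg() {
        local arg=$1
        case "$(type -t "$arg")" in
            function | builtin) ;;  # callable inside this shell as-is
            file)
                # Resolve to an on-disk path and require the execute bit.
                arg=$(type -P "$arg") && [[ -x $arg ]]
                ;;
            *) return 1 ;;          # alias, keyword, or unknown name
        esac
    }

    valid_exec_arg ls && echo "ok to exec"
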
00:04:10.756 [2024-11-26 19:42:11.479522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3387239 ] 00:04:10.756 [2024-11-26 19:42:11.565140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.016 [2024-11-26 19:42:11.603278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.585 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:11.585 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:11.585 19:42:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.585 19:42:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:11.585 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:11.585 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:11.585 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.585 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:11.585 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.585 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:11.585 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.585 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:11.585 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.585 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:11.585 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:11.585 [2024-11-26 19:42:12.330206] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:04:11.585 [2024-11-26 19:42:12.330258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3387578 ] 00:04:11.845 [2024-11-26 19:42:12.417689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.845 [2024-11-26 19:42:12.453524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:11.845 [2024-11-26 19:42:12.453575] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
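
The rpc.c ERROR lines around here are the point of exit_on_failed_rpc_init: a second target cannot bind the RPC socket the first one already holds. Reproducible by hand along these lines (sketch; the sleep stands in for the waitforlisten the test actually uses):

    build/bin/spdk_tgt -m 0x1 &   # first instance owns /var/tmp/spdk.sock
    first=$!
    sleep 1
    build/bin/spdk_tgt -m 0x2     # second instance: socket in use, app start
                                  # fails, process exits non-zero
    echo $?                       # the non-zero status the test asserts via NOT
    kill "$first" && wait "$first"
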
00:04:11.845 [2024-11-26 19:42:12.453586] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:11.845 [2024-11-26 19:42:12.453593] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:11.845 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:11.845 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:11.845 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:11.845 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:11.845 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:11.846 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:11.846 19:42:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:11.846 19:42:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3387239 00:04:11.846 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3387239 ']' 00:04:11.846 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3387239 00:04:11.846 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:11.846 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:11.846 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3387239 00:04:11.846 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:11.846 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:11.846 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3387239' 00:04:11.846 killing process with pid 3387239 00:04:11.846 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3387239 00:04:11.846 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3387239 00:04:12.106 00:04:12.106 real 0m1.329s 00:04:12.106 user 0m1.538s 00:04:12.106 sys 0m0.400s 00:04:12.106 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.106 19:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:12.106 ************************************ 00:04:12.106 END TEST exit_on_failed_rpc_init 00:04:12.106 ************************************ 00:04:12.106 19:42:12 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:12.106 00:04:12.106 real 0m13.771s 00:04:12.106 user 0m13.335s 00:04:12.106 sys 0m1.596s 00:04:12.106 19:42:12 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.106 19:42:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.106 ************************************ 00:04:12.106 END TEST skip_rpc 00:04:12.106 ************************************ 00:04:12.106 19:42:12 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:12.106 19:42:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.106 19:42:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.106 19:42:12 -- 
common/autotest_common.sh@10 -- # set +x 00:04:12.106 ************************************ 00:04:12.106 START TEST rpc_client 00:04:12.106 ************************************ 00:04:12.106 19:42:12 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:12.366 * Looking for test storage... 00:04:12.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:12.366 19:42:12 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:12.366 19:42:12 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:12.366 19:42:12 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:12.366 19:42:13 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:12.366 19:42:13 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.366 19:42:13 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.366 19:42:13 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.366 19:42:13 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.366 19:42:13 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.366 19:42:13 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.366 19:42:13 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.366 19:42:13 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.366 19:42:13 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.366 19:42:13 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.366 19:42:13 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.366 19:42:13 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:12.366 19:42:13 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:12.366 19:42:13 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.366 19:42:13 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:12.366 19:42:13 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:12.366 19:42:13 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:12.366 19:42:13 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.366 19:42:13 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:12.366 19:42:13 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.366 19:42:13 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:12.366 19:42:13 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:12.366 19:42:13 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.366 19:42:13 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:12.366 19:42:13 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.366 19:42:13 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.366 19:42:13 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.366 19:42:13 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:12.366 19:42:13 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.366 19:42:13 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:12.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.366 --rc genhtml_branch_coverage=1 00:04:12.366 --rc genhtml_function_coverage=1 00:04:12.366 --rc genhtml_legend=1 00:04:12.366 --rc geninfo_all_blocks=1 00:04:12.366 --rc geninfo_unexecuted_blocks=1 00:04:12.366 00:04:12.366 ' 00:04:12.366 19:42:13 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:12.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.366 --rc genhtml_branch_coverage=1 00:04:12.366 --rc genhtml_function_coverage=1 00:04:12.366 --rc genhtml_legend=1 00:04:12.366 --rc geninfo_all_blocks=1 00:04:12.366 --rc geninfo_unexecuted_blocks=1 00:04:12.366 00:04:12.366 ' 00:04:12.366 19:42:13 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:12.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.366 --rc genhtml_branch_coverage=1 00:04:12.366 --rc genhtml_function_coverage=1 00:04:12.366 --rc genhtml_legend=1 00:04:12.366 --rc geninfo_all_blocks=1 00:04:12.366 --rc geninfo_unexecuted_blocks=1 00:04:12.366 00:04:12.366 ' 00:04:12.366 19:42:13 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:12.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.366 --rc genhtml_branch_coverage=1 00:04:12.366 --rc genhtml_function_coverage=1 00:04:12.366 --rc genhtml_legend=1 00:04:12.366 --rc geninfo_all_blocks=1 00:04:12.366 --rc geninfo_unexecuted_blocks=1 00:04:12.366 00:04:12.366 ' 00:04:12.366 19:42:13 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:12.366 OK 00:04:12.367 19:42:13 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:12.367 00:04:12.367 real 0m0.228s 00:04:12.367 user 0m0.127s 00:04:12.367 sys 0m0.116s 00:04:12.367 19:42:13 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.367 19:42:13 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:12.367 ************************************ 00:04:12.367 END TEST rpc_client 00:04:12.367 ************************************ 00:04:12.367 19:42:13 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
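
The scripts/common.sh trace above (lt 1.15 2 and friends) is a field-by-field version comparison used to pick lcov coverage options. A loose paraphrase of just the less-than case (the real cmp_versions also handles '>', '<=', '>=' and non-numeric fields via decimal):

    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"   # split on '.', '-' and ':'
        IFS=.-: read -ra ver2 <<< "$2"
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }

    lt 1.15 2 && echo "pre-2.0 lcov: pass the branch/function coverage flags"
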
00:04:12.367 19:42:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.367 19:42:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.367 19:42:13 -- common/autotest_common.sh@10 -- # set +x 00:04:12.367 ************************************ 00:04:12.367 START TEST json_config 00:04:12.367 ************************************ 00:04:12.367 19:42:13 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:12.629 19:42:13 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:12.629 19:42:13 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:12.629 19:42:13 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:12.629 19:42:13 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:12.629 19:42:13 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.629 19:42:13 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.629 19:42:13 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.629 19:42:13 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.629 19:42:13 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.629 19:42:13 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.629 19:42:13 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.629 19:42:13 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.629 19:42:13 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.629 19:42:13 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.629 19:42:13 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.629 19:42:13 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:12.629 19:42:13 json_config -- scripts/common.sh@345 -- # : 1 00:04:12.629 19:42:13 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.629 19:42:13 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:12.629 19:42:13 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:12.629 19:42:13 json_config -- scripts/common.sh@353 -- # local d=1 00:04:12.629 19:42:13 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.629 19:42:13 json_config -- scripts/common.sh@355 -- # echo 1 00:04:12.629 19:42:13 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.629 19:42:13 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:12.629 19:42:13 json_config -- scripts/common.sh@353 -- # local d=2 00:04:12.629 19:42:13 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.629 19:42:13 json_config -- scripts/common.sh@355 -- # echo 2 00:04:12.629 19:42:13 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.629 19:42:13 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.629 19:42:13 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.629 19:42:13 json_config -- scripts/common.sh@368 -- # return 0 00:04:12.629 19:42:13 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.629 19:42:13 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:12.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.629 --rc genhtml_branch_coverage=1 00:04:12.629 --rc genhtml_function_coverage=1 00:04:12.629 --rc genhtml_legend=1 00:04:12.629 --rc geninfo_all_blocks=1 00:04:12.629 --rc geninfo_unexecuted_blocks=1 00:04:12.629 00:04:12.629 ' 00:04:12.629 19:42:13 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:12.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.629 --rc genhtml_branch_coverage=1 00:04:12.629 --rc genhtml_function_coverage=1 00:04:12.629 --rc genhtml_legend=1 00:04:12.629 --rc geninfo_all_blocks=1 00:04:12.629 --rc geninfo_unexecuted_blocks=1 00:04:12.629 00:04:12.629 ' 00:04:12.629 19:42:13 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:12.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.629 --rc genhtml_branch_coverage=1 00:04:12.630 --rc genhtml_function_coverage=1 00:04:12.630 --rc genhtml_legend=1 00:04:12.630 --rc geninfo_all_blocks=1 00:04:12.630 --rc geninfo_unexecuted_blocks=1 00:04:12.630 00:04:12.630 ' 00:04:12.630 19:42:13 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:12.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.630 --rc genhtml_branch_coverage=1 00:04:12.630 --rc genhtml_function_coverage=1 00:04:12.630 --rc genhtml_legend=1 00:04:12.630 --rc geninfo_all_blocks=1 00:04:12.630 --rc geninfo_unexecuted_blocks=1 00:04:12.630 00:04:12.630 ' 00:04:12.630 19:42:13 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:12.630 19:42:13 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:12.630 19:42:13 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:12.630 19:42:13 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:12.630 19:42:13 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:12.630 19:42:13 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:12.630 19:42:13 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:12.630 19:42:13 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:12.630 19:42:13 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:12.630 19:42:13 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:12.630 19:42:13 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:12.630 19:42:13 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:12.630 19:42:13 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:12.630 19:42:13 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:12.630 19:42:13 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:12.630 19:42:13 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:12.630 19:42:13 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:12.630 19:42:13 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:12.630 19:42:13 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:12.630 19:42:13 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:12.630 19:42:13 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:12.630 19:42:13 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:12.630 19:42:13 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:12.630 19:42:13 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.630 19:42:13 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.630 19:42:13 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.630 19:42:13 json_config -- paths/export.sh@5 -- # export PATH 00:04:12.630 19:42:13 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.630 19:42:13 json_config -- nvmf/common.sh@51 -- # : 0 00:04:12.630 19:42:13 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:12.630 19:42:13 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
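
One piece of noise worth decoding before the next trace lines: nvmf/common.sh line 33 runs '[' '' -eq 1 ']', and test rejects an empty operand in a numeric comparison, producing the "integer expression expected" complaint shown below. Reproducible in any bash (sketch):

    flag=''
    [ "$flag" -eq 1 ]        # bash: [: : integer expression expected (status 2)
    [ "${flag:-0}" -eq 1 ]   # quiet: the empty value defaults to 0 and the
                             # test simply evaluates false
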
00:04:12.630 19:42:13 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:12.630 19:42:13 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:12.630 19:42:13 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:12.630 19:42:13 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:12.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:12.630 19:42:13 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:12.630 19:42:13 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:12.630 19:42:13 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:12.630 19:42:13 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:12.630 19:42:13 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:12.630 19:42:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:12.630 19:42:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:12.630 19:42:13 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:12.630 19:42:13 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:12.630 19:42:13 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:12.630 19:42:13 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:12.630 19:42:13 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:12.630 19:42:13 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:12.630 19:42:13 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:12.630 19:42:13 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:12.630 19:42:13 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:12.630 19:42:13 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:12.630 19:42:13 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:12.630 19:42:13 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:12.630 INFO: JSON configuration test init 00:04:12.630 19:42:13 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:12.630 19:42:13 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:12.630 19:42:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:12.630 19:42:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.630 19:42:13 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:12.630 19:42:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:12.630 19:42:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.630 19:42:13 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:12.630 19:42:13 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:12.630 19:42:13 json_config -- json_config/common.sh@10 -- # shift 00:04:12.630 19:42:13 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:12.630 19:42:13 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:12.630 19:42:13 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:12.630 19:42:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:12.630 19:42:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:12.630 19:42:13 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3387944 00:04:12.630 19:42:13 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:12.630 Waiting for target to run... 00:04:12.630 19:42:13 json_config -- json_config/common.sh@25 -- # waitforlisten 3387944 /var/tmp/spdk_tgt.sock 00:04:12.630 19:42:13 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:12.630 19:42:13 json_config -- common/autotest_common.sh@835 -- # '[' -z 3387944 ']' 00:04:12.630 19:42:13 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:12.630 19:42:13 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:12.630 19:42:13 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:12.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:12.630 19:42:13 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:12.630 19:42:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.893 [2024-11-26 19:42:13.457497] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
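
While the "Waiting for process to start up..." message above scrolls, waitforlisten polls the target. A loose sketch of that loop (the real autotest_common.sh helper also honors a caller-supplied retry count and cleans up on timeout; the 0.1s poll interval here is an assumption):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local i max_retries=100
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" || return 1   # target died before it could listen
            # An RPC that answers means the socket is up and serving.
            scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null && return 0
            sleep 0.1                    # poll interval: an assumption
        done
        return 1
    }
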
00:04:12.893 [2024-11-26 19:42:13.457557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3387944 ] 00:04:13.153 [2024-11-26 19:42:13.791467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.153 [2024-11-26 19:42:13.823006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.724 19:42:14 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:13.724 19:42:14 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:13.724 19:42:14 json_config -- json_config/common.sh@26 -- # echo '' 00:04:13.724 00:04:13.724 19:42:14 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:13.724 19:42:14 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:13.724 19:42:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:13.724 19:42:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.724 19:42:14 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:13.724 19:42:14 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:13.724 19:42:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:13.724 19:42:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.724 19:42:14 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:13.724 19:42:14 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:13.724 19:42:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:14.297 19:42:14 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:14.297 19:42:14 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:14.297 19:42:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:14.297 19:42:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.297 19:42:14 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:14.297 19:42:14 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:14.297 19:42:14 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:14.297 19:42:14 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:14.297 19:42:14 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:14.297 19:42:14 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:14.297 19:42:14 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:14.297 19:42:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:14.297 19:42:15 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:14.297 19:42:15 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:14.297 19:42:15 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:14.297 19:42:15 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:14.297 19:42:15 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:14.297 19:42:15 json_config -- json_config/json_config.sh@54 -- # sort 00:04:14.297 19:42:15 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:14.297 19:42:15 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:14.297 19:42:15 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:14.297 19:42:15 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:14.297 19:42:15 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:14.297 19:42:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.297 19:42:15 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:14.297 19:42:15 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:14.297 19:42:15 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:14.297 19:42:15 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:14.297 19:42:15 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:14.297 19:42:15 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:14.297 19:42:15 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:14.297 19:42:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:14.297 19:42:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.297 19:42:15 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:14.297 19:42:15 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:14.297 19:42:15 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:14.297 19:42:15 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:14.297 19:42:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:14.558 MallocForNvmf0 00:04:14.558 19:42:15 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:14.558 19:42:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:14.819 MallocForNvmf1 00:04:14.819 19:42:15 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:14.819 19:42:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:14.819 [2024-11-26 19:42:15.601370] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:14.819 19:42:15 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:14.819 19:42:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:15.080 19:42:15 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:15.080 19:42:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:15.341 19:42:16 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:15.342 19:42:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:15.604 19:42:16 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:15.604 19:42:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:15.604 [2024-11-26 19:42:16.323558] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:15.604 19:42:16 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:15.604 19:42:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:15.604 19:42:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.604 19:42:16 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:15.604 19:42:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:15.604 19:42:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.865 19:42:16 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:15.865 19:42:16 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:15.865 19:42:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:15.865 MallocBdevForConfigChangeCheck 00:04:15.865 19:42:16 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:15.865 19:42:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:15.865 19:42:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.865 19:42:16 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:15.865 19:42:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:16.446 19:42:16 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:16.446 INFO: shutting down applications... 
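For reference, the target setup traced above reduces to the following RPC sequence (a condensed sketch: sizes, NQN, and socket path are copied from the trace, and rpc.py stands in for the full scripts/rpc.py path that tgt_rpc prepends):

  rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
  rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

The earlier tgt_check_notification_types step computes its set difference with a shell idiom visible in the trace: concatenate both lists and keep only the lines that occur exactly once, so an empty result means the sets match:

  echo bdev_register bdev_unregister fsdev_register fsdev_unregister \
       fsdev_register fsdev_unregister bdev_register bdev_unregister |
      tr ' ' '\n' | sort | uniq -u   # prints nothing when the two lists agree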
00:04:16.446 19:42:16 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:16.446 19:42:16 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:16.446 19:42:16 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:16.446 19:42:16 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:16.707 Calling clear_iscsi_subsystem 00:04:16.707 Calling clear_nvmf_subsystem 00:04:16.707 Calling clear_nbd_subsystem 00:04:16.707 Calling clear_ublk_subsystem 00:04:16.707 Calling clear_vhost_blk_subsystem 00:04:16.707 Calling clear_vhost_scsi_subsystem 00:04:16.707 Calling clear_bdev_subsystem 00:04:16.707 19:42:17 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:16.707 19:42:17 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:16.707 19:42:17 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:16.707 19:42:17 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:16.707 19:42:17 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:16.707 19:42:17 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:16.967 19:42:17 json_config -- json_config/json_config.sh@352 -- # break 00:04:16.967 19:42:17 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:16.967 19:42:17 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:16.967 19:42:17 json_config -- json_config/common.sh@31 -- # local app=target 00:04:16.967 19:42:17 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:16.967 19:42:17 json_config -- json_config/common.sh@35 -- # [[ -n 3387944 ]] 00:04:16.967 19:42:17 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3387944 00:04:16.967 19:42:17 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:16.967 19:42:17 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:16.967 19:42:17 json_config -- json_config/common.sh@41 -- # kill -0 3387944 00:04:16.967 19:42:17 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:17.539 19:42:18 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:17.539 19:42:18 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:17.539 19:42:18 json_config -- json_config/common.sh@41 -- # kill -0 3387944 00:04:17.539 19:42:18 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:17.539 19:42:18 json_config -- json_config/common.sh@43 -- # break 00:04:17.539 19:42:18 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:17.539 19:42:18 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:17.539 SPDK target shutdown done 00:04:17.539 19:42:18 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:17.539 INFO: relaunching applications... 
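The shutdown just traced (json_config/common.sh@38-45) is a plain SIGINT-and-poll loop; sketched here with $pid standing in for app_pid["$app"]:

  kill -SIGINT "$pid"                       # ask the target to exit cleanly
  for ((i = 0; i < 30; i++)); do
      kill -0 "$pid" 2>/dev/null || break   # kill -0 only probes whether the PID exists
      sleep 0.5                             # at most 30 half-second waits before giving up
  done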
00:04:17.539 19:42:18 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:17.539 19:42:18 json_config -- json_config/common.sh@9 -- # local app=target 00:04:17.539 19:42:18 json_config -- json_config/common.sh@10 -- # shift 00:04:17.539 19:42:18 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:17.539 19:42:18 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:17.539 19:42:18 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:17.539 19:42:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:17.539 19:42:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:17.539 19:42:18 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3389215 00:04:17.539 19:42:18 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:17.539 Waiting for target to run... 00:04:17.539 19:42:18 json_config -- json_config/common.sh@25 -- # waitforlisten 3389215 /var/tmp/spdk_tgt.sock 00:04:17.539 19:42:18 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:17.539 19:42:18 json_config -- common/autotest_common.sh@835 -- # '[' -z 3389215 ']' 00:04:17.539 19:42:18 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:17.539 19:42:18 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:17.539 19:42:18 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:17.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:17.539 19:42:18 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:17.539 19:42:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.539 [2024-11-26 19:42:18.331342] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:04:17.539 [2024-11-26 19:42:18.331393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3389215 ] 00:04:18.112 [2024-11-26 19:42:18.654266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.112 [2024-11-26 19:42:18.680788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.374 [2024-11-26 19:42:19.180155] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:18.670 [2024-11-26 19:42:19.212626] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:18.670 19:42:19 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:18.670 19:42:19 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:18.670 19:42:19 json_config -- json_config/common.sh@26 -- # echo '' 00:04:18.670 00:04:18.670 19:42:19 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:18.670 19:42:19 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:18.670 INFO: Checking if target configuration is the same... 
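The relaunch above restarts the target from the config saved a moment earlier instead of an empty one; stripped of the workspace prefix, the command from the trace is:

  spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json
  # -m 0x1: single-core mask, -s 1024: hugepage memory in MB,
  # -r: RPC listen socket, --json: configuration to replay at startup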
00:04:18.670 19:42:19 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:18.670 19:42:19 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:18.670 19:42:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:18.670 + '[' 2 -ne 2 ']' 00:04:18.670 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:18.670 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:18.670 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:18.670 +++ basename /dev/fd/62 00:04:18.670 ++ mktemp /tmp/62.XXX 00:04:18.670 + tmp_file_1=/tmp/62.l4t 00:04:18.670 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:18.670 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:18.670 + tmp_file_2=/tmp/spdk_tgt_config.json.fnE 00:04:18.670 + ret=0 00:04:18.670 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:18.930 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:18.930 + diff -u /tmp/62.l4t /tmp/spdk_tgt_config.json.fnE 00:04:18.930 + echo 'INFO: JSON config files are the same' 00:04:18.930 INFO: JSON config files are the same 00:04:18.930 + rm /tmp/62.l4t /tmp/spdk_tgt_config.json.fnE 00:04:18.930 + exit 0 00:04:18.930 19:42:19 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:18.930 19:42:19 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:18.930 INFO: changing configuration and checking if this can be detected... 00:04:18.930 19:42:19 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:18.930 19:42:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:19.190 19:42:19 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:19.190 19:42:19 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:19.190 19:42:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:19.190 + '[' 2 -ne 2 ']' 00:04:19.190 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:19.190 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
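Both json_diff.sh invocations, the one completed above and the one resuming below, never compare the raw files: each side is first normalized with config_filter.py -method sort so that key and array ordering cannot produce a false mismatch. A minimal sketch of the pattern (input file names are illustrative):

  a=$(mktemp /tmp/62.XXX)
  b=$(mktemp /tmp/spdk_tgt_config.json.XXX)
  config_filter.py -method sort < live_config.json  > "$a"
  config_filter.py -method sort < saved_config.json > "$b"
  diff -u "$a" "$b" && echo 'INFO: JSON config files are the same'
  rm "$a" "$b"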
00:04:19.190 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:19.190 +++ basename /dev/fd/62 00:04:19.190 ++ mktemp /tmp/62.XXX 00:04:19.190 + tmp_file_1=/tmp/62.D0m 00:04:19.190 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:19.190 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:19.190 + tmp_file_2=/tmp/spdk_tgt_config.json.fgr 00:04:19.190 + ret=0 00:04:19.190 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:19.450 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:19.450 + diff -u /tmp/62.D0m /tmp/spdk_tgt_config.json.fgr 00:04:19.450 + ret=1 00:04:19.450 + echo '=== Start of file: /tmp/62.D0m ===' 00:04:19.450 + cat /tmp/62.D0m 00:04:19.451 + echo '=== End of file: /tmp/62.D0m ===' 00:04:19.451 + echo '' 00:04:19.451 + echo '=== Start of file: /tmp/spdk_tgt_config.json.fgr ===' 00:04:19.451 + cat /tmp/spdk_tgt_config.json.fgr 00:04:19.451 + echo '=== End of file: /tmp/spdk_tgt_config.json.fgr ===' 00:04:19.451 + echo '' 00:04:19.451 + rm /tmp/62.D0m /tmp/spdk_tgt_config.json.fgr 00:04:19.451 + exit 1 00:04:19.451 19:42:20 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:19.451 INFO: configuration change detected. 00:04:19.451 19:42:20 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:19.451 19:42:20 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:19.451 19:42:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:19.451 19:42:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.451 19:42:20 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:19.451 19:42:20 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:19.451 19:42:20 json_config -- json_config/json_config.sh@324 -- # [[ -n 3389215 ]] 00:04:19.451 19:42:20 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:19.451 19:42:20 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:19.451 19:42:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:19.451 19:42:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.451 19:42:20 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:19.451 19:42:20 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:19.451 19:42:20 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:19.451 19:42:20 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:19.451 19:42:20 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:19.451 19:42:20 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:19.451 19:42:20 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:19.451 19:42:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.712 19:42:20 json_config -- json_config/json_config.sh@330 -- # killprocess 3389215 00:04:19.712 19:42:20 json_config -- common/autotest_common.sh@954 -- # '[' -z 3389215 ']' 00:04:19.712 19:42:20 json_config -- common/autotest_common.sh@958 -- # kill -0 3389215 00:04:19.712 19:42:20 json_config -- common/autotest_common.sh@959 -- # uname 00:04:19.712 19:42:20 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:19.712 19:42:20 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3389215 00:04:19.712 19:42:20 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:19.712 19:42:20 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:19.712 19:42:20 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3389215' 00:04:19.712 killing process with pid 3389215 00:04:19.712 19:42:20 json_config -- common/autotest_common.sh@973 -- # kill 3389215 00:04:19.712 19:42:20 json_config -- common/autotest_common.sh@978 -- # wait 3389215 00:04:19.973 19:42:20 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:19.973 19:42:20 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:19.973 19:42:20 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:19.973 19:42:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.973 19:42:20 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:19.973 19:42:20 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:19.973 INFO: Success 00:04:19.973 00:04:19.973 real 0m7.486s 00:04:19.973 user 0m9.077s 00:04:19.973 sys 0m2.016s 00:04:19.973 19:42:20 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.973 19:42:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.973 ************************************ 00:04:19.973 END TEST json_config 00:04:19.973 ************************************ 00:04:19.973 19:42:20 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:19.973 19:42:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.973 19:42:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.973 19:42:20 -- common/autotest_common.sh@10 -- # set +x 00:04:19.973 ************************************ 00:04:19.973 START TEST json_config_extra_key 00:04:19.973 ************************************ 00:04:19.973 19:42:20 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:20.235 19:42:20 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:20.235 19:42:20 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:20.235 19:42:20 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:20.235 19:42:20 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:20.235 19:42:20 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.235 19:42:20 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.235 19:42:20 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.235 19:42:20 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.235 19:42:20 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.235 19:42:20 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.235 19:42:20 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.235 19:42:20 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.235 19:42:20 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:20.235 19:42:20 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.235 19:42:20 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.235 19:42:20 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:20.235 19:42:20 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:20.235 19:42:20 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.235 19:42:20 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:20.235 19:42:20 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:20.235 19:42:20 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:20.235 19:42:20 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.235 19:42:20 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:20.235 19:42:20 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.235 19:42:20 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:20.235 19:42:20 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:20.235 19:42:20 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.235 19:42:20 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:20.235 19:42:20 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.235 19:42:20 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.235 19:42:20 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.235 19:42:20 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:20.235 19:42:20 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.235 19:42:20 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:20.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.235 --rc genhtml_branch_coverage=1 00:04:20.235 --rc genhtml_function_coverage=1 00:04:20.235 --rc genhtml_legend=1 00:04:20.235 --rc geninfo_all_blocks=1 00:04:20.235 --rc geninfo_unexecuted_blocks=1 00:04:20.235 00:04:20.235 ' 00:04:20.235 19:42:20 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:20.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.235 --rc genhtml_branch_coverage=1 00:04:20.235 --rc genhtml_function_coverage=1 00:04:20.235 --rc genhtml_legend=1 00:04:20.235 --rc geninfo_all_blocks=1 00:04:20.235 --rc geninfo_unexecuted_blocks=1 00:04:20.235 00:04:20.235 ' 00:04:20.235 19:42:20 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:20.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.235 --rc genhtml_branch_coverage=1 00:04:20.235 --rc genhtml_function_coverage=1 00:04:20.235 --rc genhtml_legend=1 00:04:20.235 --rc geninfo_all_blocks=1 00:04:20.235 --rc geninfo_unexecuted_blocks=1 00:04:20.235 00:04:20.235 ' 00:04:20.235 19:42:20 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:20.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.235 --rc genhtml_branch_coverage=1 00:04:20.235 --rc genhtml_function_coverage=1 00:04:20.235 --rc genhtml_legend=1 00:04:20.235 --rc geninfo_all_blocks=1 00:04:20.235 --rc geninfo_unexecuted_blocks=1 00:04:20.235 00:04:20.235 ' 00:04:20.235 19:42:20 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:20.235 19:42:20 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:20.235 19:42:20 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:20.235 19:42:20 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:20.235 19:42:20 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:20.235 19:42:20 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:20.236 19:42:20 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:20.236 19:42:20 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:20.236 19:42:20 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:20.236 19:42:20 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:20.236 19:42:20 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:20.236 19:42:20 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:20.236 19:42:20 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:20.236 19:42:20 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:20.236 19:42:20 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:20.236 19:42:20 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:20.236 19:42:20 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:20.236 19:42:20 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:20.236 19:42:20 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:20.236 19:42:20 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:20.236 19:42:20 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:20.236 19:42:20 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:20.236 19:42:20 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:20.236 19:42:20 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.236 19:42:20 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.236 19:42:20 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.236 19:42:20 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:20.236 19:42:20 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.236 19:42:20 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:20.236 19:42:20 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:20.236 19:42:20 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:20.236 19:42:20 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:20.236 19:42:20 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:20.236 19:42:20 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:20.236 19:42:20 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:20.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:20.236 19:42:20 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:20.236 19:42:20 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:20.236 19:42:20 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:20.236 19:42:20 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:20.236 19:42:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:20.236 19:42:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:20.236 19:42:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:20.236 19:42:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:20.236 19:42:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:20.236 19:42:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:20.236 19:42:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:20.236 19:42:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:20.236 19:42:20 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:20.236 19:42:20 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:20.236 INFO: launching applications... 
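The "[: : integer expression expected" complaint above comes from nvmf/common.sh line 33, where test's -eq receives an empty left operand ('[' '' -eq 1 ']'). Expanding the variable with a default sidesteps the error; SOME_FLAG below is a placeholder, since the trace does not show which variable was empty:

  [ "${SOME_FLAG:-0}" -eq 1 ]   # an unset/empty value now expands to 0, not ''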
00:04:20.236 19:42:20 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:20.236 19:42:20 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:20.236 19:42:20 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:20.236 19:42:20 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:20.236 19:42:20 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:20.236 19:42:20 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:20.236 19:42:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:20.236 19:42:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:20.236 19:42:20 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3390080 00:04:20.236 19:42:20 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:20.236 Waiting for target to run... 00:04:20.236 19:42:20 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3390080 /var/tmp/spdk_tgt.sock 00:04:20.236 19:42:20 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3390080 ']' 00:04:20.236 19:42:20 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:20.236 19:42:20 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.236 19:42:20 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:20.236 19:42:20 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:20.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:20.236 19:42:20 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.236 19:42:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:20.236 [2024-11-26 19:42:21.017214] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:04:20.236 [2024-11-26 19:42:21.017286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3390080 ] 00:04:20.497 [2024-11-26 19:42:21.293491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.757 [2024-11-26 19:42:21.321192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.017 19:42:21 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.017 19:42:21 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:21.017 19:42:21 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:21.017 00:04:21.017 19:42:21 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:21.017 INFO: shutting down applications... 
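waitforlisten, traced above with max_retries=100, amounts to polling the new target's RPC socket until it answers. A sketch under the assumption that an rpc_get_methods probe is an acceptable liveness check; the real helper in autotest_common.sh may probe differently and also watches the PID:

  for ((retry = 0; retry < 100; retry++)); do   # only the retry cap comes from the trace
      rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods &>/dev/null && break
      sleep 0.1                                 # interval is illustrative
  done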
00:04:21.017 19:42:21 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:21.017 19:42:21 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:21.017 19:42:21 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:21.017 19:42:21 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3390080 ]] 00:04:21.017 19:42:21 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3390080 00:04:21.017 19:42:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:21.017 19:42:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:21.017 19:42:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3390080 00:04:21.017 19:42:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:21.587 19:42:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:21.587 19:42:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:21.587 19:42:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3390080 00:04:21.587 19:42:22 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:21.587 19:42:22 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:21.587 19:42:22 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:21.587 19:42:22 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:21.587 SPDK target shutdown done 00:04:21.587 19:42:22 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:21.587 Success 00:04:21.587 00:04:21.587 real 0m1.568s 00:04:21.587 user 0m1.202s 00:04:21.587 sys 0m0.390s 00:04:21.587 19:42:22 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.587 19:42:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:21.587 ************************************ 00:04:21.587 END TEST json_config_extra_key 00:04:21.587 ************************************ 00:04:21.587 19:42:22 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:21.587 19:42:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.587 19:42:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.587 19:42:22 -- common/autotest_common.sh@10 -- # set +x 00:04:21.587 ************************************ 00:04:21.587 START TEST alias_rpc 00:04:21.587 ************************************ 00:04:21.587 19:42:22 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:21.848 * Looking for test storage... 
00:04:21.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:21.848 19:42:22 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:21.848 19:42:22 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:21.848 19:42:22 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:21.848 19:42:22 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:21.848 19:42:22 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.848 19:42:22 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.848 19:42:22 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.848 19:42:22 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.848 19:42:22 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.848 19:42:22 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.848 19:42:22 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.848 19:42:22 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.848 19:42:22 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.848 19:42:22 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.848 19:42:22 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.848 19:42:22 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:21.848 19:42:22 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:21.848 19:42:22 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.848 19:42:22 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:21.848 19:42:22 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:21.848 19:42:22 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:21.848 19:42:22 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.848 19:42:22 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:21.848 19:42:22 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.848 19:42:22 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:21.848 19:42:22 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:21.848 19:42:22 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.848 19:42:22 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:21.848 19:42:22 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.848 19:42:22 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.848 19:42:22 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.848 19:42:22 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:21.848 19:42:22 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.848 19:42:22 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:21.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.848 --rc genhtml_branch_coverage=1 00:04:21.848 --rc genhtml_function_coverage=1 00:04:21.848 --rc genhtml_legend=1 00:04:21.848 --rc geninfo_all_blocks=1 00:04:21.848 --rc geninfo_unexecuted_blocks=1 00:04:21.848 00:04:21.848 ' 00:04:21.848 19:42:22 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:21.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.848 --rc genhtml_branch_coverage=1 00:04:21.848 --rc genhtml_function_coverage=1 00:04:21.848 --rc genhtml_legend=1 00:04:21.848 --rc geninfo_all_blocks=1 00:04:21.848 --rc geninfo_unexecuted_blocks=1 00:04:21.848 00:04:21.848 ' 00:04:21.848 19:42:22 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:21.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.848 --rc genhtml_branch_coverage=1 00:04:21.848 --rc genhtml_function_coverage=1 00:04:21.848 --rc genhtml_legend=1 00:04:21.848 --rc geninfo_all_blocks=1 00:04:21.848 --rc geninfo_unexecuted_blocks=1 00:04:21.848 00:04:21.848 ' 00:04:21.848 19:42:22 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:21.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.848 --rc genhtml_branch_coverage=1 00:04:21.848 --rc genhtml_function_coverage=1 00:04:21.848 --rc genhtml_legend=1 00:04:21.848 --rc geninfo_all_blocks=1 00:04:21.848 --rc geninfo_unexecuted_blocks=1 00:04:21.848 00:04:21.848 ' 00:04:21.848 19:42:22 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:21.848 19:42:22 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3390465 00:04:21.848 19:42:22 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3390465 00:04:21.848 19:42:22 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.848 19:42:22 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3390465 ']' 00:04:21.848 19:42:22 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.848 19:42:22 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.848 19:42:22 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.848 19:42:22 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.848 19:42:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.848 [2024-11-26 19:42:22.653173] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
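The scripts/common.sh trace repeated before each test ("lt 1.15 2" gating the lcov options) shows a component-wise version comparison. Reconstructed from the traced line numbers, simplified to numeric components (the real helper first normalizes each component through its decimal check):

  cmp_versions() {   # usage: cmp_versions 1.15 '<' 2
      local IFS=.-:                       # split versions on dots, dashes, colons
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for ((v = 0; v < max; v++)); do     # missing components default to 0
          if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $2 == '>' ]]; return; fi
          if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $2 == '<' ]]; return; fi
      done
      [[ $2 == '=' ]]                     # all components equal
  }
  lt() { cmp_versions "$1" '<' "$2"; }    # the form invoked in the trace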
00:04:21.848 [2024-11-26 19:42:22.653243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3390465 ] 00:04:22.109 [2024-11-26 19:42:22.739876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.109 [2024-11-26 19:42:22.779830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.690 19:42:23 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:22.690 19:42:23 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:22.690 19:42:23 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:22.952 19:42:23 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3390465 00:04:22.952 19:42:23 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3390465 ']' 00:04:22.952 19:42:23 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3390465 00:04:22.952 19:42:23 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:22.952 19:42:23 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:22.952 19:42:23 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3390465 00:04:22.952 19:42:23 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:22.952 19:42:23 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:22.953 19:42:23 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3390465' 00:04:22.953 killing process with pid 3390465 00:04:22.953 19:42:23 alias_rpc -- common/autotest_common.sh@973 -- # kill 3390465 00:04:22.953 19:42:23 alias_rpc -- common/autotest_common.sh@978 -- # wait 3390465 00:04:23.214 00:04:23.214 real 0m1.502s 00:04:23.214 user 0m1.652s 00:04:23.214 sys 0m0.413s 00:04:23.214 19:42:23 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.214 19:42:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.214 ************************************ 00:04:23.214 END TEST alias_rpc 00:04:23.214 ************************************ 00:04:23.214 19:42:23 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:23.214 19:42:23 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:23.214 19:42:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.214 19:42:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.214 19:42:23 -- common/autotest_common.sh@10 -- # set +x 00:04:23.214 ************************************ 00:04:23.214 START TEST spdkcli_tcp 00:04:23.214 ************************************ 00:04:23.214 19:42:23 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:23.475 * Looking for test storage... 
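killprocess, traced above for PID 3390465, double-checks the process before terminating it. A simplified sketch; the real helper also refuses to kill anything whose command name is sudo and only runs the ps check on Linux:

  killprocess() {   # usage: killprocess <pid>
      kill -0 "$1" || return              # PID must still exist
      ps --no-headers -o comm= "$1"       # command name, e.g. reactor_0 in the trace
      kill "$1" && wait "$1"              # SIGTERM, then reap the child
  }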
00:04:23.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:23.475 19:42:24 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:23.475 19:42:24 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:23.475 19:42:24 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:23.475 19:42:24 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:23.475 19:42:24 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.475 19:42:24 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.475 19:42:24 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.475 19:42:24 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.475 19:42:24 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.475 19:42:24 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.475 19:42:24 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.475 19:42:24 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.475 19:42:24 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.475 19:42:24 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.475 19:42:24 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.475 19:42:24 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:23.475 19:42:24 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:23.475 19:42:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.475 19:42:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:23.475 19:42:24 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:23.475 19:42:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:23.475 19:42:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.475 19:42:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:23.475 19:42:24 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.475 19:42:24 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:23.475 19:42:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:23.475 19:42:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.475 19:42:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:23.475 19:42:24 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.475 19:42:24 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.475 19:42:24 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.475 19:42:24 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:23.475 19:42:24 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.475 19:42:24 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:23.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.475 --rc genhtml_branch_coverage=1 00:04:23.475 --rc genhtml_function_coverage=1 00:04:23.475 --rc genhtml_legend=1 00:04:23.475 --rc geninfo_all_blocks=1 00:04:23.475 --rc geninfo_unexecuted_blocks=1 00:04:23.475 00:04:23.475 ' 00:04:23.475 19:42:24 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:23.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.475 --rc genhtml_branch_coverage=1 00:04:23.475 --rc genhtml_function_coverage=1 00:04:23.475 --rc genhtml_legend=1 00:04:23.475 --rc geninfo_all_blocks=1 00:04:23.475 --rc 
geninfo_unexecuted_blocks=1 00:04:23.475 00:04:23.475 ' 00:04:23.475 19:42:24 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:23.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.475 --rc genhtml_branch_coverage=1 00:04:23.475 --rc genhtml_function_coverage=1 00:04:23.475 --rc genhtml_legend=1 00:04:23.475 --rc geninfo_all_blocks=1 00:04:23.475 --rc geninfo_unexecuted_blocks=1 00:04:23.475 00:04:23.475 ' 00:04:23.475 19:42:24 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:23.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.475 --rc genhtml_branch_coverage=1 00:04:23.475 --rc genhtml_function_coverage=1 00:04:23.475 --rc genhtml_legend=1 00:04:23.475 --rc geninfo_all_blocks=1 00:04:23.475 --rc geninfo_unexecuted_blocks=1 00:04:23.475 00:04:23.475 ' 00:04:23.475 19:42:24 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:23.475 19:42:24 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:23.475 19:42:24 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:23.475 19:42:24 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:23.475 19:42:24 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:23.475 19:42:24 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:23.475 19:42:24 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:23.475 19:42:24 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:23.475 19:42:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:23.475 19:42:24 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3390901 00:04:23.475 19:42:24 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3390901 00:04:23.475 19:42:24 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:23.475 19:42:24 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3390901 ']' 00:04:23.475 19:42:24 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.475 19:42:24 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:23.475 19:42:24 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.475 19:42:24 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:23.475 19:42:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:23.475 [2024-11-26 19:42:24.246577] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
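The spdkcli_tcp test needs the target's UNIX-domain RPC socket reachable over TCP, and the trace below shows socat providing that bridge. Condensed, with addresses and ports exactly as traced:

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &    # socat_pid in the trace
  rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods    # -r retries, -t timeout, over TCP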
00:04:23.475 [2024-11-26 19:42:24.246642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3390901 ] 00:04:23.735 [2024-11-26 19:42:24.334222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:23.735 [2024-11-26 19:42:24.376447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.735 [2024-11-26 19:42:24.376538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.306 19:42:25 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:24.306 19:42:25 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:24.306 19:42:25 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3391086 00:04:24.306 19:42:25 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:24.306 19:42:25 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:24.566 [ 00:04:24.566 "bdev_malloc_delete", 00:04:24.566 "bdev_malloc_create", 00:04:24.566 "bdev_null_resize", 00:04:24.567 "bdev_null_delete", 00:04:24.567 "bdev_null_create", 00:04:24.567 "bdev_nvme_cuse_unregister", 00:04:24.567 "bdev_nvme_cuse_register", 00:04:24.567 "bdev_opal_new_user", 00:04:24.567 "bdev_opal_set_lock_state", 00:04:24.567 "bdev_opal_delete", 00:04:24.567 "bdev_opal_get_info", 00:04:24.567 "bdev_opal_create", 00:04:24.567 "bdev_nvme_opal_revert", 00:04:24.567 "bdev_nvme_opal_init", 00:04:24.567 "bdev_nvme_send_cmd", 00:04:24.567 "bdev_nvme_set_keys", 00:04:24.567 "bdev_nvme_get_path_iostat", 00:04:24.567 "bdev_nvme_get_mdns_discovery_info", 00:04:24.567 "bdev_nvme_stop_mdns_discovery", 00:04:24.567 "bdev_nvme_start_mdns_discovery", 00:04:24.567 "bdev_nvme_set_multipath_policy", 00:04:24.567 "bdev_nvme_set_preferred_path", 00:04:24.567 "bdev_nvme_get_io_paths", 00:04:24.567 "bdev_nvme_remove_error_injection", 00:04:24.567 "bdev_nvme_add_error_injection", 00:04:24.567 "bdev_nvme_get_discovery_info", 00:04:24.567 "bdev_nvme_stop_discovery", 00:04:24.567 "bdev_nvme_start_discovery", 00:04:24.567 "bdev_nvme_get_controller_health_info", 00:04:24.567 "bdev_nvme_disable_controller", 00:04:24.567 "bdev_nvme_enable_controller", 00:04:24.567 "bdev_nvme_reset_controller", 00:04:24.567 "bdev_nvme_get_transport_statistics", 00:04:24.567 "bdev_nvme_apply_firmware", 00:04:24.567 "bdev_nvme_detach_controller", 00:04:24.567 "bdev_nvme_get_controllers", 00:04:24.567 "bdev_nvme_attach_controller", 00:04:24.567 "bdev_nvme_set_hotplug", 00:04:24.567 "bdev_nvme_set_options", 00:04:24.567 "bdev_passthru_delete", 00:04:24.567 "bdev_passthru_create", 00:04:24.567 "bdev_lvol_set_parent_bdev", 00:04:24.567 "bdev_lvol_set_parent", 00:04:24.567 "bdev_lvol_check_shallow_copy", 00:04:24.567 "bdev_lvol_start_shallow_copy", 00:04:24.567 "bdev_lvol_grow_lvstore", 00:04:24.567 "bdev_lvol_get_lvols", 00:04:24.567 "bdev_lvol_get_lvstores", 00:04:24.567 "bdev_lvol_delete", 00:04:24.567 "bdev_lvol_set_read_only", 00:04:24.567 "bdev_lvol_resize", 00:04:24.567 "bdev_lvol_decouple_parent", 00:04:24.567 "bdev_lvol_inflate", 00:04:24.567 "bdev_lvol_rename", 00:04:24.567 "bdev_lvol_clone_bdev", 00:04:24.567 "bdev_lvol_clone", 00:04:24.567 "bdev_lvol_snapshot", 00:04:24.567 "bdev_lvol_create", 00:04:24.567 "bdev_lvol_delete_lvstore", 00:04:24.567 "bdev_lvol_rename_lvstore", 
00:04:24.567 "bdev_lvol_create_lvstore", 00:04:24.567 "bdev_raid_set_options", 00:04:24.567 "bdev_raid_remove_base_bdev", 00:04:24.567 "bdev_raid_add_base_bdev", 00:04:24.567 "bdev_raid_delete", 00:04:24.567 "bdev_raid_create", 00:04:24.567 "bdev_raid_get_bdevs", 00:04:24.567 "bdev_error_inject_error", 00:04:24.567 "bdev_error_delete", 00:04:24.567 "bdev_error_create", 00:04:24.567 "bdev_split_delete", 00:04:24.567 "bdev_split_create", 00:04:24.567 "bdev_delay_delete", 00:04:24.567 "bdev_delay_create", 00:04:24.567 "bdev_delay_update_latency", 00:04:24.567 "bdev_zone_block_delete", 00:04:24.567 "bdev_zone_block_create", 00:04:24.567 "blobfs_create", 00:04:24.567 "blobfs_detect", 00:04:24.567 "blobfs_set_cache_size", 00:04:24.567 "bdev_aio_delete", 00:04:24.567 "bdev_aio_rescan", 00:04:24.567 "bdev_aio_create", 00:04:24.567 "bdev_ftl_set_property", 00:04:24.567 "bdev_ftl_get_properties", 00:04:24.567 "bdev_ftl_get_stats", 00:04:24.567 "bdev_ftl_unmap", 00:04:24.567 "bdev_ftl_unload", 00:04:24.567 "bdev_ftl_delete", 00:04:24.567 "bdev_ftl_load", 00:04:24.567 "bdev_ftl_create", 00:04:24.567 "bdev_virtio_attach_controller", 00:04:24.567 "bdev_virtio_scsi_get_devices", 00:04:24.567 "bdev_virtio_detach_controller", 00:04:24.567 "bdev_virtio_blk_set_hotplug", 00:04:24.567 "bdev_iscsi_delete", 00:04:24.567 "bdev_iscsi_create", 00:04:24.567 "bdev_iscsi_set_options", 00:04:24.567 "accel_error_inject_error", 00:04:24.567 "ioat_scan_accel_module", 00:04:24.567 "dsa_scan_accel_module", 00:04:24.567 "iaa_scan_accel_module", 00:04:24.567 "vfu_virtio_create_fs_endpoint", 00:04:24.567 "vfu_virtio_create_scsi_endpoint", 00:04:24.567 "vfu_virtio_scsi_remove_target", 00:04:24.567 "vfu_virtio_scsi_add_target", 00:04:24.567 "vfu_virtio_create_blk_endpoint", 00:04:24.567 "vfu_virtio_delete_endpoint", 00:04:24.567 "keyring_file_remove_key", 00:04:24.567 "keyring_file_add_key", 00:04:24.567 "keyring_linux_set_options", 00:04:24.567 "fsdev_aio_delete", 00:04:24.567 "fsdev_aio_create", 00:04:24.567 "iscsi_get_histogram", 00:04:24.567 "iscsi_enable_histogram", 00:04:24.567 "iscsi_set_options", 00:04:24.567 "iscsi_get_auth_groups", 00:04:24.567 "iscsi_auth_group_remove_secret", 00:04:24.567 "iscsi_auth_group_add_secret", 00:04:24.567 "iscsi_delete_auth_group", 00:04:24.567 "iscsi_create_auth_group", 00:04:24.567 "iscsi_set_discovery_auth", 00:04:24.567 "iscsi_get_options", 00:04:24.567 "iscsi_target_node_request_logout", 00:04:24.567 "iscsi_target_node_set_redirect", 00:04:24.567 "iscsi_target_node_set_auth", 00:04:24.567 "iscsi_target_node_add_lun", 00:04:24.567 "iscsi_get_stats", 00:04:24.567 "iscsi_get_connections", 00:04:24.567 "iscsi_portal_group_set_auth", 00:04:24.567 "iscsi_start_portal_group", 00:04:24.567 "iscsi_delete_portal_group", 00:04:24.567 "iscsi_create_portal_group", 00:04:24.567 "iscsi_get_portal_groups", 00:04:24.567 "iscsi_delete_target_node", 00:04:24.567 "iscsi_target_node_remove_pg_ig_maps", 00:04:24.567 "iscsi_target_node_add_pg_ig_maps", 00:04:24.567 "iscsi_create_target_node", 00:04:24.567 "iscsi_get_target_nodes", 00:04:24.567 "iscsi_delete_initiator_group", 00:04:24.567 "iscsi_initiator_group_remove_initiators", 00:04:24.567 "iscsi_initiator_group_add_initiators", 00:04:24.567 "iscsi_create_initiator_group", 00:04:24.567 "iscsi_get_initiator_groups", 00:04:24.567 "nvmf_set_crdt", 00:04:24.567 "nvmf_set_config", 00:04:24.567 "nvmf_set_max_subsystems", 00:04:24.567 "nvmf_stop_mdns_prr", 00:04:24.567 "nvmf_publish_mdns_prr", 00:04:24.567 "nvmf_subsystem_get_listeners", 00:04:24.567 
"nvmf_subsystem_get_qpairs", 00:04:24.567 "nvmf_subsystem_get_controllers", 00:04:24.567 "nvmf_get_stats", 00:04:24.567 "nvmf_get_transports", 00:04:24.567 "nvmf_create_transport", 00:04:24.567 "nvmf_get_targets", 00:04:24.567 "nvmf_delete_target", 00:04:24.567 "nvmf_create_target", 00:04:24.567 "nvmf_subsystem_allow_any_host", 00:04:24.567 "nvmf_subsystem_set_keys", 00:04:24.567 "nvmf_subsystem_remove_host", 00:04:24.567 "nvmf_subsystem_add_host", 00:04:24.567 "nvmf_ns_remove_host", 00:04:24.567 "nvmf_ns_add_host", 00:04:24.567 "nvmf_subsystem_remove_ns", 00:04:24.567 "nvmf_subsystem_set_ns_ana_group", 00:04:24.567 "nvmf_subsystem_add_ns", 00:04:24.567 "nvmf_subsystem_listener_set_ana_state", 00:04:24.567 "nvmf_discovery_get_referrals", 00:04:24.567 "nvmf_discovery_remove_referral", 00:04:24.567 "nvmf_discovery_add_referral", 00:04:24.567 "nvmf_subsystem_remove_listener", 00:04:24.567 "nvmf_subsystem_add_listener", 00:04:24.567 "nvmf_delete_subsystem", 00:04:24.567 "nvmf_create_subsystem", 00:04:24.567 "nvmf_get_subsystems", 00:04:24.567 "env_dpdk_get_mem_stats", 00:04:24.567 "nbd_get_disks", 00:04:24.567 "nbd_stop_disk", 00:04:24.567 "nbd_start_disk", 00:04:24.567 "ublk_recover_disk", 00:04:24.567 "ublk_get_disks", 00:04:24.567 "ublk_stop_disk", 00:04:24.567 "ublk_start_disk", 00:04:24.567 "ublk_destroy_target", 00:04:24.567 "ublk_create_target", 00:04:24.567 "virtio_blk_create_transport", 00:04:24.567 "virtio_blk_get_transports", 00:04:24.567 "vhost_controller_set_coalescing", 00:04:24.567 "vhost_get_controllers", 00:04:24.567 "vhost_delete_controller", 00:04:24.567 "vhost_create_blk_controller", 00:04:24.567 "vhost_scsi_controller_remove_target", 00:04:24.567 "vhost_scsi_controller_add_target", 00:04:24.567 "vhost_start_scsi_controller", 00:04:24.567 "vhost_create_scsi_controller", 00:04:24.567 "thread_set_cpumask", 00:04:24.567 "scheduler_set_options", 00:04:24.567 "framework_get_governor", 00:04:24.567 "framework_get_scheduler", 00:04:24.567 "framework_set_scheduler", 00:04:24.567 "framework_get_reactors", 00:04:24.567 "thread_get_io_channels", 00:04:24.567 "thread_get_pollers", 00:04:24.567 "thread_get_stats", 00:04:24.567 "framework_monitor_context_switch", 00:04:24.567 "spdk_kill_instance", 00:04:24.567 "log_enable_timestamps", 00:04:24.567 "log_get_flags", 00:04:24.567 "log_clear_flag", 00:04:24.567 "log_set_flag", 00:04:24.567 "log_get_level", 00:04:24.567 "log_set_level", 00:04:24.567 "log_get_print_level", 00:04:24.567 "log_set_print_level", 00:04:24.567 "framework_enable_cpumask_locks", 00:04:24.567 "framework_disable_cpumask_locks", 00:04:24.567 "framework_wait_init", 00:04:24.567 "framework_start_init", 00:04:24.567 "scsi_get_devices", 00:04:24.567 "bdev_get_histogram", 00:04:24.568 "bdev_enable_histogram", 00:04:24.568 "bdev_set_qos_limit", 00:04:24.568 "bdev_set_qd_sampling_period", 00:04:24.568 "bdev_get_bdevs", 00:04:24.568 "bdev_reset_iostat", 00:04:24.568 "bdev_get_iostat", 00:04:24.568 "bdev_examine", 00:04:24.568 "bdev_wait_for_examine", 00:04:24.568 "bdev_set_options", 00:04:24.568 "accel_get_stats", 00:04:24.568 "accel_set_options", 00:04:24.568 "accel_set_driver", 00:04:24.568 "accel_crypto_key_destroy", 00:04:24.568 "accel_crypto_keys_get", 00:04:24.568 "accel_crypto_key_create", 00:04:24.568 "accel_assign_opc", 00:04:24.568 "accel_get_module_info", 00:04:24.568 "accel_get_opc_assignments", 00:04:24.568 "vmd_rescan", 00:04:24.568 "vmd_remove_device", 00:04:24.568 "vmd_enable", 00:04:24.568 "sock_get_default_impl", 00:04:24.568 "sock_set_default_impl", 
00:04:24.568 "sock_impl_set_options", 00:04:24.568 "sock_impl_get_options", 00:04:24.568 "iobuf_get_stats", 00:04:24.568 "iobuf_set_options", 00:04:24.568 "keyring_get_keys", 00:04:24.568 "vfu_tgt_set_base_path", 00:04:24.568 "framework_get_pci_devices", 00:04:24.568 "framework_get_config", 00:04:24.568 "framework_get_subsystems", 00:04:24.568 "fsdev_set_opts", 00:04:24.568 "fsdev_get_opts", 00:04:24.568 "trace_get_info", 00:04:24.568 "trace_get_tpoint_group_mask", 00:04:24.568 "trace_disable_tpoint_group", 00:04:24.568 "trace_enable_tpoint_group", 00:04:24.568 "trace_clear_tpoint_mask", 00:04:24.568 "trace_set_tpoint_mask", 00:04:24.568 "notify_get_notifications", 00:04:24.568 "notify_get_types", 00:04:24.568 "spdk_get_version", 00:04:24.568 "rpc_get_methods" 00:04:24.568 ] 00:04:24.568 19:42:25 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:24.568 19:42:25 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:24.568 19:42:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:24.568 19:42:25 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:24.568 19:42:25 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3390901 00:04:24.568 19:42:25 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3390901 ']' 00:04:24.568 19:42:25 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3390901 00:04:24.568 19:42:25 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:24.568 19:42:25 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:24.568 19:42:25 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3390901 00:04:24.568 19:42:25 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:24.568 19:42:25 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:24.568 19:42:25 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3390901' 00:04:24.568 killing process with pid 3390901 00:04:24.568 19:42:25 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3390901 00:04:24.568 19:42:25 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3390901 00:04:24.829 00:04:24.829 real 0m1.545s 00:04:24.829 user 0m2.816s 00:04:24.829 sys 0m0.469s 00:04:24.829 19:42:25 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.829 19:42:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:24.829 ************************************ 00:04:24.829 END TEST spdkcli_tcp 00:04:24.829 ************************************ 00:04:24.829 19:42:25 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:24.829 19:42:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.829 19:42:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.829 19:42:25 -- common/autotest_common.sh@10 -- # set +x 00:04:24.829 ************************************ 00:04:24.829 START TEST dpdk_mem_utility 00:04:24.829 ************************************ 00:04:24.829 19:42:25 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:25.090 * Looking for test storage... 
00:04:25.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:25.090 19:42:25 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:25.090 19:42:25 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:25.090 19:42:25 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:25.090 19:42:25 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:25.090 19:42:25 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:25.090 19:42:25 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:25.090 19:42:25 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:25.090 19:42:25 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.090 19:42:25 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:25.090 19:42:25 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:25.090 19:42:25 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:25.090 19:42:25 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:25.090 19:42:25 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:25.090 19:42:25 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:25.090 19:42:25 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:25.090 19:42:25 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:25.090 19:42:25 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:25.090 19:42:25 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:25.090 19:42:25 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:25.090 19:42:25 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:25.090 19:42:25 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:25.090 19:42:25 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.090 19:42:25 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:25.090 19:42:25 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:25.090 19:42:25 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:25.090 19:42:25 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:25.090 19:42:25 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.090 19:42:25 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:25.090 19:42:25 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:25.090 19:42:25 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:25.090 19:42:25 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:25.090 19:42:25 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:25.090 19:42:25 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.090 19:42:25 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:25.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.090 --rc genhtml_branch_coverage=1 00:04:25.090 --rc genhtml_function_coverage=1 00:04:25.090 --rc genhtml_legend=1 00:04:25.090 --rc geninfo_all_blocks=1 00:04:25.090 --rc geninfo_unexecuted_blocks=1 00:04:25.090 00:04:25.090 ' 00:04:25.090 19:42:25 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:25.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.090 --rc 
genhtml_branch_coverage=1 00:04:25.090 --rc genhtml_function_coverage=1 00:04:25.090 --rc genhtml_legend=1 00:04:25.090 --rc geninfo_all_blocks=1 00:04:25.090 --rc geninfo_unexecuted_blocks=1 00:04:25.090 00:04:25.090 ' 00:04:25.090 19:42:25 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:25.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.090 --rc genhtml_branch_coverage=1 00:04:25.090 --rc genhtml_function_coverage=1 00:04:25.090 --rc genhtml_legend=1 00:04:25.090 --rc geninfo_all_blocks=1 00:04:25.090 --rc geninfo_unexecuted_blocks=1 00:04:25.090 00:04:25.090 ' 00:04:25.090 19:42:25 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:25.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.090 --rc genhtml_branch_coverage=1 00:04:25.090 --rc genhtml_function_coverage=1 00:04:25.090 --rc genhtml_legend=1 00:04:25.090 --rc geninfo_all_blocks=1 00:04:25.090 --rc geninfo_unexecuted_blocks=1 00:04:25.090 00:04:25.090 ' 00:04:25.090 19:42:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:25.090 19:42:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3391286 00:04:25.090 19:42:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3391286 00:04:25.090 19:42:25 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3391286 ']' 00:04:25.090 19:42:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.090 19:42:25 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.090 19:42:25 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.090 19:42:25 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.090 19:42:25 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.090 19:42:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:25.090 [2024-11-26 19:42:25.853512] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:04:25.090 [2024-11-26 19:42:25.853578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3391286 ] 00:04:25.352 [2024-11-26 19:42:25.942858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.352 [2024-11-26 19:42:25.978260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.924 19:42:26 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.924 19:42:26 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:25.924 19:42:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:25.924 19:42:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:25.924 19:42:26 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.924 19:42:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:25.924 { 00:04:25.924 "filename": "/tmp/spdk_mem_dump.txt" 00:04:25.924 } 00:04:25.924 19:42:26 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.924 19:42:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:25.924 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:25.924 1 heaps totaling size 818.000000 MiB 00:04:25.924 size: 818.000000 MiB heap id: 0 00:04:25.924 end heaps---------- 00:04:25.924 9 mempools totaling size 603.782043 MiB 00:04:25.924 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:25.924 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:25.924 size: 100.555481 MiB name: bdev_io_3391286 00:04:25.924 size: 50.003479 MiB name: msgpool_3391286 00:04:25.924 size: 36.509338 MiB name: fsdev_io_3391286 00:04:25.924 size: 21.763794 MiB name: PDU_Pool 00:04:25.924 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:25.924 size: 4.133484 MiB name: evtpool_3391286 00:04:25.924 size: 0.026123 MiB name: Session_Pool 00:04:25.924 end mempools------- 00:04:25.924 6 memzones totaling size 4.142822 MiB 00:04:25.924 size: 1.000366 MiB name: RG_ring_0_3391286 00:04:25.924 size: 1.000366 MiB name: RG_ring_1_3391286 00:04:25.924 size: 1.000366 MiB name: RG_ring_4_3391286 00:04:25.924 size: 1.000366 MiB name: RG_ring_5_3391286 00:04:25.924 size: 0.125366 MiB name: RG_ring_2_3391286 00:04:25.924 size: 0.015991 MiB name: RG_ring_3_3391286 00:04:25.924 end memzones------- 00:04:25.924 19:42:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:25.924 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:25.924 list of free elements. 
size: 10.852478 MiB 00:04:25.924 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:25.924 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:25.924 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:25.924 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:25.924 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:25.924 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:25.924 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:25.924 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:25.924 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:25.924 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:25.924 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:25.924 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:25.924 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:25.924 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:25.924 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:25.924 list of standard malloc elements. size: 199.218628 MiB 00:04:25.924 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:25.924 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:25.924 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:25.924 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:25.924 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:25.924 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:25.924 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:25.924 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:25.924 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:25.924 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:25.924 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:25.924 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:25.924 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:25.924 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:25.924 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:25.924 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:25.924 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:25.924 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:25.924 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:25.924 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:25.924 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:25.924 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:25.924 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:25.924 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:25.924 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:25.924 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:25.924 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:25.924 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:25.924 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:25.924 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:25.924 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:25.924 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:25.924 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:25.924 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:25.924 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:25.924 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:25.924 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:25.924 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:25.924 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:25.924 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:25.924 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:25.924 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:25.924 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:25.924 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:25.924 list of memzone associated elements. size: 607.928894 MiB 00:04:25.924 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:25.924 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:25.924 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:25.924 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:25.924 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:25.924 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_3391286_0 00:04:25.924 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:25.924 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3391286_0 00:04:25.924 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:25.924 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3391286_0 00:04:25.924 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:25.924 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:25.924 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:25.924 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:25.924 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:25.924 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3391286_0 00:04:25.924 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:25.925 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3391286 00:04:25.925 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:25.925 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3391286 00:04:25.925 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:25.925 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:25.925 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:25.925 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:25.925 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:25.925 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:25.925 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:25.925 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:25.925 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:25.925 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3391286 00:04:25.925 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:25.925 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3391286 00:04:25.925 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:25.925 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3391286 00:04:25.925 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:04:25.925 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3391286 00:04:25.925 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:25.925 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3391286 00:04:25.925 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:25.925 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3391286 00:04:25.925 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:25.925 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:25.925 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:25.925 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:25.925 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:25.925 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:25.925 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:25.925 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3391286 00:04:25.925 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:25.925 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3391286 00:04:25.925 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:25.925 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:25.925 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:25.925 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:25.925 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:25.925 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3391286 00:04:25.925 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:25.925 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:25.925 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:25.925 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3391286 00:04:25.925 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:25.925 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3391286 00:04:25.925 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:25.925 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3391286 00:04:25.925 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:25.925 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:25.925 19:42:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:25.925 19:42:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3391286 00:04:25.925 19:42:26 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3391286 ']' 00:04:25.925 19:42:26 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3391286 00:04:25.925 19:42:26 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:25.925 19:42:26 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:26.184 19:42:26 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3391286 00:04:26.184 19:42:26 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:26.184 19:42:26 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:26.184 19:42:26 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3391286' 00:04:26.184 killing process with pid 3391286 00:04:26.184 19:42:26 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3391286 00:04:26.184 19:42:26 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3391286 00:04:26.184 00:04:26.184 real 0m1.394s 00:04:26.184 user 0m1.444s 00:04:26.184 sys 0m0.426s 00:04:26.184 19:42:26 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.184 19:42:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:26.184 ************************************ 00:04:26.184 END TEST dpdk_mem_utility 00:04:26.184 ************************************ 00:04:26.445 19:42:27 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:26.446 19:42:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.446 19:42:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.446 19:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:26.446 ************************************ 00:04:26.446 START TEST event 00:04:26.446 ************************************ 00:04:26.446 19:42:27 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:26.446 * Looking for test storage... 00:04:26.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:26.446 19:42:27 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:26.446 19:42:27 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:26.446 19:42:27 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:26.446 19:42:27 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:26.446 19:42:27 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.446 19:42:27 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.446 19:42:27 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.446 19:42:27 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.446 19:42:27 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.446 19:42:27 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.446 19:42:27 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.446 19:42:27 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.446 19:42:27 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.446 19:42:27 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.446 19:42:27 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.446 19:42:27 event -- scripts/common.sh@344 -- # case "$op" in 00:04:26.446 19:42:27 event -- scripts/common.sh@345 -- # : 1 00:04:26.446 19:42:27 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.446 19:42:27 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:26.446 19:42:27 event -- scripts/common.sh@365 -- # decimal 1 00:04:26.446 19:42:27 event -- scripts/common.sh@353 -- # local d=1 00:04:26.446 19:42:27 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.446 19:42:27 event -- scripts/common.sh@355 -- # echo 1 00:04:26.446 19:42:27 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.446 19:42:27 event -- scripts/common.sh@366 -- # decimal 2 00:04:26.446 19:42:27 event -- scripts/common.sh@353 -- # local d=2 00:04:26.446 19:42:27 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.446 19:42:27 event -- scripts/common.sh@355 -- # echo 2 00:04:26.446 19:42:27 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.446 19:42:27 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.446 19:42:27 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.446 19:42:27 event -- scripts/common.sh@368 -- # return 0 00:04:26.446 19:42:27 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.446 19:42:27 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:26.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.446 --rc genhtml_branch_coverage=1 00:04:26.446 --rc genhtml_function_coverage=1 00:04:26.446 --rc genhtml_legend=1 00:04:26.446 --rc geninfo_all_blocks=1 00:04:26.446 --rc geninfo_unexecuted_blocks=1 00:04:26.446 00:04:26.446 ' 00:04:26.446 19:42:27 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:26.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.446 --rc genhtml_branch_coverage=1 00:04:26.446 --rc genhtml_function_coverage=1 00:04:26.446 --rc genhtml_legend=1 00:04:26.446 --rc geninfo_all_blocks=1 00:04:26.446 --rc geninfo_unexecuted_blocks=1 00:04:26.446 00:04:26.446 ' 00:04:26.446 19:42:27 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:26.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.446 --rc genhtml_branch_coverage=1 00:04:26.446 --rc genhtml_function_coverage=1 00:04:26.446 --rc genhtml_legend=1 00:04:26.446 --rc geninfo_all_blocks=1 00:04:26.446 --rc geninfo_unexecuted_blocks=1 00:04:26.446 00:04:26.446 ' 00:04:26.446 19:42:27 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:26.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.446 --rc genhtml_branch_coverage=1 00:04:26.446 --rc genhtml_function_coverage=1 00:04:26.446 --rc genhtml_legend=1 00:04:26.446 --rc geninfo_all_blocks=1 00:04:26.446 --rc geninfo_unexecuted_blocks=1 00:04:26.446 00:04:26.446 ' 00:04:26.446 19:42:27 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:26.446 19:42:27 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:26.706 19:42:27 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:26.706 19:42:27 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:26.706 19:42:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.706 19:42:27 event -- common/autotest_common.sh@10 -- # set +x 00:04:26.706 ************************************ 00:04:26.706 START TEST event_perf 00:04:26.706 ************************************ 00:04:26.706 19:42:27 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:26.706 Running I/O for 1 seconds...[2024-11-26 19:42:27.327944] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:04:26.706 [2024-11-26 19:42:27.328049] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3391648 ] 00:04:26.706 [2024-11-26 19:42:27.420441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:26.706 [2024-11-26 19:42:27.466780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.706 [2024-11-26 19:42:27.466938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:26.706 [2024-11-26 19:42:27.467093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.706 Running I/O for 1 seconds...[2024-11-26 19:42:27.467094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:28.087 00:04:28.087 lcore 0: 180280 00:04:28.087 lcore 1: 180282 00:04:28.087 lcore 2: 180280 00:04:28.087 lcore 3: 180279 00:04:28.087 done. 00:04:28.087 00:04:28.087 real 0m1.190s 00:04:28.087 user 0m4.094s 00:04:28.087 sys 0m0.091s 00:04:28.087 19:42:28 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.087 19:42:28 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:28.087 ************************************ 00:04:28.087 END TEST event_perf 00:04:28.087 ************************************ 00:04:28.087 19:42:28 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:28.087 19:42:28 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:28.087 19:42:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.087 19:42:28 event -- common/autotest_common.sh@10 -- # set +x 00:04:28.087 ************************************ 00:04:28.087 START TEST event_reactor 00:04:28.087 ************************************ 00:04:28.087 19:42:28 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:28.087 [2024-11-26 19:42:28.594961] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:04:28.087 [2024-11-26 19:42:28.595056] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3392004 ] 00:04:28.087 [2024-11-26 19:42:28.686178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.087 [2024-11-26 19:42:28.719607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.037 test_start 00:04:29.037 oneshot 00:04:29.037 tick 100 00:04:29.037 tick 100 00:04:29.037 tick 250 00:04:29.037 tick 100 00:04:29.037 tick 100 00:04:29.037 tick 100 00:04:29.037 tick 250 00:04:29.037 tick 500 00:04:29.037 tick 100 00:04:29.037 tick 100 00:04:29.037 tick 250 00:04:29.037 tick 100 00:04:29.037 tick 100 00:04:29.037 test_end 00:04:29.037 00:04:29.037 real 0m1.176s 00:04:29.037 user 0m1.093s 00:04:29.037 sys 0m0.078s 00:04:29.037 19:42:29 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.037 19:42:29 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:29.037 ************************************ 00:04:29.037 END TEST event_reactor 00:04:29.037 ************************************ 00:04:29.037 19:42:29 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:29.037 19:42:29 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:29.037 19:42:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.037 19:42:29 event -- common/autotest_common.sh@10 -- # set +x 00:04:29.037 ************************************ 00:04:29.037 START TEST event_reactor_perf 00:04:29.037 ************************************ 00:04:29.037 19:42:29 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:29.037 [2024-11-26 19:42:29.844680] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:04:29.037 [2024-11-26 19:42:29.844785] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3392426 ] 00:04:29.338 [2024-11-26 19:42:29.932002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.338 [2024-11-26 19:42:29.960941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.337 test_start 00:04:30.337 test_end 00:04:30.337 Performance: 540838 events per second 00:04:30.337 00:04:30.337 real 0m1.165s 00:04:30.337 user 0m1.081s 00:04:30.337 sys 0m0.080s 00:04:30.337 19:42:30 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.337 19:42:30 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:30.337 ************************************ 00:04:30.337 END TEST event_reactor_perf 00:04:30.337 ************************************ 00:04:30.337 19:42:31 event -- event/event.sh@49 -- # uname -s 00:04:30.337 19:42:31 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:30.337 19:42:31 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:30.337 19:42:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.337 19:42:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.337 19:42:31 event -- common/autotest_common.sh@10 -- # set +x 00:04:30.337 ************************************ 00:04:30.337 START TEST event_scheduler 00:04:30.337 ************************************ 00:04:30.337 19:42:31 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:30.598 * Looking for test storage... 
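The three event tests above share one pattern: run a small benchmark binary for a fixed time, then check its output. A sketch of the invocations as the trace shows them, with workspace-relative paths in place of the absolute Jenkins paths:

    # event_perf: per-lcore event counts on 4 cores (mask 0xF) for 1 second
    ./test/event/event_perf/event_perf -m 0xF -t 1

    # reactor: single-reactor oneshot/tick exercise for 1 second
    ./test/event/reactor/reactor -t 1

    # reactor_perf: events-per-second throughput on one core for 1 second
    ./test/event/reactor_perf/reactor_perf -t 1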
00:04:30.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:30.598 19:42:31 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:30.598 19:42:31 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:30.598 19:42:31 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:30.598 19:42:31 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:30.598 19:42:31 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.598 19:42:31 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.598 19:42:31 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.598 19:42:31 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.598 19:42:31 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.598 19:42:31 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.598 19:42:31 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.598 19:42:31 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.598 19:42:31 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.598 19:42:31 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.598 19:42:31 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.598 19:42:31 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:30.598 19:42:31 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:30.598 19:42:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.598 19:42:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.598 19:42:31 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:30.598 19:42:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:30.598 19:42:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.598 19:42:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:30.598 19:42:31 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.598 19:42:31 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:30.598 19:42:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:30.598 19:42:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.598 19:42:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:30.598 19:42:31 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.598 19:42:31 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.598 19:42:31 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.598 19:42:31 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:30.598 19:42:31 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.598 19:42:31 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:30.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.598 --rc genhtml_branch_coverage=1 00:04:30.598 --rc genhtml_function_coverage=1 00:04:30.598 --rc genhtml_legend=1 00:04:30.598 --rc geninfo_all_blocks=1 00:04:30.598 --rc geninfo_unexecuted_blocks=1 00:04:30.598 00:04:30.598 ' 00:04:30.598 19:42:31 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:30.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.598 --rc genhtml_branch_coverage=1 00:04:30.598 --rc genhtml_function_coverage=1 00:04:30.598 --rc genhtml_legend=1 00:04:30.598 --rc geninfo_all_blocks=1 00:04:30.598 --rc geninfo_unexecuted_blocks=1 00:04:30.598 00:04:30.598 ' 00:04:30.598 19:42:31 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:30.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.598 --rc genhtml_branch_coverage=1 00:04:30.598 --rc genhtml_function_coverage=1 00:04:30.598 --rc genhtml_legend=1 00:04:30.598 --rc geninfo_all_blocks=1 00:04:30.598 --rc geninfo_unexecuted_blocks=1 00:04:30.598 00:04:30.598 ' 00:04:30.598 19:42:31 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:30.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.598 --rc genhtml_branch_coverage=1 00:04:30.598 --rc genhtml_function_coverage=1 00:04:30.598 --rc genhtml_legend=1 00:04:30.599 --rc geninfo_all_blocks=1 00:04:30.599 --rc geninfo_unexecuted_blocks=1 00:04:30.599 00:04:30.599 ' 00:04:30.599 19:42:31 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:30.599 19:42:31 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3392857 00:04:30.599 19:42:31 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:30.599 19:42:31 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3392857 00:04:30.599 19:42:31 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc 
-f 00:04:30.599 19:42:31 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3392857 ']' 00:04:30.599 19:42:31 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.599 19:42:31 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.599 19:42:31 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.599 19:42:31 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.599 19:42:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:30.599 [2024-11-26 19:42:31.330771] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:04:30.599 [2024-11-26 19:42:31.330842] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3392857 ] 00:04:30.858 [2024-11-26 19:42:31.425348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:30.858 [2024-11-26 19:42:31.480714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.858 [2024-11-26 19:42:31.480881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:30.858 [2024-11-26 19:42:31.481043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:30.858 [2024-11-26 19:42:31.481043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:31.430 19:42:32 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.430 19:42:32 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:31.430 19:42:32 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:31.430 19:42:32 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.430 19:42:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:31.430 [2024-11-26 19:42:32.199555] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:31.430 [2024-11-26 19:42:32.199574] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:31.430 [2024-11-26 19:42:32.199584] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:31.430 [2024-11-26 19:42:32.199591] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:31.430 [2024-11-26 19:42:32.199596] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:31.430 19:42:32 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.430 19:42:32 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:31.430 19:42:32 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.430 19:42:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:31.692 [2024-11-26 19:42:32.262495] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
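The scheduler_create_thread test that follows drives the scheduler test application through an rpc.py plugin rather than built-in RPCs. A condensed sketch of the calls it makes, assuming the scheduler_plugin module from test/event/scheduler is on PYTHONPATH; the thread ids 11 and 12 are the values this particular run got back:

    # create a busy thread pinned to core 0 (-n name, -m cpumask, -a active percentage)
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # create an idle thread on the same core
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # drop one thread to 50% activity, then delete another by id
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12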
00:04:31.692 19:42:32 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.692 19:42:32 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:31.692 19:42:32 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.692 19:42:32 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.692 19:42:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:31.692 ************************************ 00:04:31.692 START TEST scheduler_create_thread 00:04:31.692 ************************************ 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.692 2 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.692 3 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.692 4 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.692 5 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.692 6 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.692 7 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.692 8 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.692 9 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.692 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.264 10 00:04:32.264 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.265 19:42:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:32.265 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.265 19:42:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.646 19:42:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.646 19:42:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:33.646 19:42:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:33.646 19:42:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.646 19:42:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.222 19:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.222 19:42:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:34.222 19:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.222 19:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.163 19:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.163 19:42:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:35.163 19:42:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:35.163 19:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.163 19:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.733 19:42:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.733 00:04:35.733 real 0m4.224s 00:04:35.733 user 0m0.026s 00:04:35.733 sys 0m0.006s 00:04:35.733 19:42:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.733 19:42:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.733 ************************************ 00:04:35.733 END TEST scheduler_create_thread 00:04:35.733 ************************************ 00:04:35.993 19:42:36 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:35.993 19:42:36 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3392857 00:04:35.993 19:42:36 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3392857 ']' 00:04:35.993 19:42:36 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 3392857 00:04:35.993 19:42:36 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:35.993 19:42:36 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.993 19:42:36 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3392857 00:04:35.993 19:42:36 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:35.993 19:42:36 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:35.993 19:42:36 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3392857' 00:04:35.993 killing process with pid 3392857 00:04:35.993 19:42:36 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3392857 00:04:35.993 19:42:36 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3392857 00:04:35.993 [2024-11-26 19:42:36.808225] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
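Every test in this log tears its target down through the same killprocess helper; its probe-then-kill sequence is what the repeated kill -0, ps, kill, and wait lines above show. A simplified sketch of that pattern, with the sudo case reduced to a bail-out (the real helper handles sudo-wrapped processes separately):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0   # already gone, nothing to do
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1           # never kill a wrapping sudo directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                      # reap the child so ports and sockets free up
    }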
00:04:36.253 00:04:36.253 real 0m5.892s 00:04:36.253 user 0m13.123s 00:04:36.253 sys 0m0.444s 00:04:36.253 19:42:36 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.253 19:42:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:36.253 ************************************ 00:04:36.253 END TEST event_scheduler 00:04:36.253 ************************************ 00:04:36.253 19:42:37 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:36.254 19:42:37 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:36.254 19:42:37 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.254 19:42:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.254 19:42:37 event -- common/autotest_common.sh@10 -- # set +x 00:04:36.254 ************************************ 00:04:36.254 START TEST app_repeat 00:04:36.254 ************************************ 00:04:36.254 19:42:37 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:36.254 19:42:37 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.254 19:42:37 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.254 19:42:37 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:36.254 19:42:37 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:36.254 19:42:37 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:36.254 19:42:37 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:36.254 19:42:37 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:36.254 19:42:37 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3394199 00:04:36.254 19:42:37 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.254 19:42:37 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:36.254 19:42:37 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3394199' 00:04:36.254 Process app_repeat pid: 3394199 00:04:36.254 19:42:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:36.254 19:42:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:36.254 spdk_app_start Round 0 00:04:36.254 19:42:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3394199 /var/tmp/spdk-nbd.sock 00:04:36.254 19:42:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3394199 ']' 00:04:36.254 19:42:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:36.254 19:42:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.254 19:42:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:36.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:36.254 19:42:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.254 19:42:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:36.514 [2024-11-26 19:42:37.083033] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:04:36.514 [2024-11-26 19:42:37.083090] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3394199 ] 00:04:36.514 [2024-11-26 19:42:37.168018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:36.514 [2024-11-26 19:42:37.200286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.514 [2024-11-26 19:42:37.200437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.514 19:42:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.514 19:42:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:36.514 19:42:37 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:36.774 Malloc0 00:04:36.774 19:42:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:37.034 Malloc1 00:04:37.034 19:42:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:37.034 19:42:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.035 19:42:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:37.035 19:42:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:37.035 19:42:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.035 19:42:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:37.035 19:42:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:37.035 19:42:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.035 19:42:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:37.035 19:42:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:37.035 19:42:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.035 19:42:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:37.035 19:42:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:37.035 19:42:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:37.035 19:42:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:37.035 19:42:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:37.295 /dev/nbd0 00:04:37.295 19:42:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:37.295 19:42:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:37.295 19:42:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:37.295 19:42:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:37.295 19:42:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:37.295 19:42:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:37.295 19:42:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:04:37.295 19:42:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:37.295 19:42:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:37.295 19:42:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:37.295 19:42:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:37.295 1+0 records in 00:04:37.295 1+0 records out 00:04:37.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261724 s, 15.7 MB/s 00:04:37.295 19:42:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.295 19:42:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:37.295 19:42:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.295 19:42:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:37.295 19:42:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:37.295 19:42:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:37.295 19:42:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:37.295 19:42:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:37.295 /dev/nbd1 00:04:37.556 19:42:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:37.556 19:42:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:37.556 19:42:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:37.556 19:42:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:37.556 19:42:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:37.556 19:42:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:37.556 19:42:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:37.556 19:42:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:37.556 19:42:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:37.556 19:42:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:37.556 19:42:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:37.556 1+0 records in 00:04:37.556 1+0 records out 00:04:37.556 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275336 s, 14.9 MB/s 00:04:37.556 19:42:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.556 19:42:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:37.556 19:42:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.556 19:42:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:37.556 19:42:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:37.556 19:42:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:37.556 19:42:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:37.556 
19:42:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:37.556 19:42:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.556 19:42:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:37.556 19:42:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:37.556 { 00:04:37.556 "nbd_device": "/dev/nbd0", 00:04:37.556 "bdev_name": "Malloc0" 00:04:37.556 }, 00:04:37.556 { 00:04:37.556 "nbd_device": "/dev/nbd1", 00:04:37.556 "bdev_name": "Malloc1" 00:04:37.556 } 00:04:37.556 ]' 00:04:37.556 19:42:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:37.556 { 00:04:37.556 "nbd_device": "/dev/nbd0", 00:04:37.556 "bdev_name": "Malloc0" 00:04:37.556 }, 00:04:37.556 { 00:04:37.556 "nbd_device": "/dev/nbd1", 00:04:37.556 "bdev_name": "Malloc1" 00:04:37.556 } 00:04:37.556 ]' 00:04:37.556 19:42:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:37.816 19:42:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:37.816 /dev/nbd1' 00:04:37.816 19:42:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:37.816 /dev/nbd1' 00:04:37.816 19:42:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:37.816 19:42:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:37.816 19:42:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:37.816 19:42:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:37.816 19:42:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:37.816 19:42:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:37.816 19:42:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.816 19:42:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:37.817 19:42:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:37.817 19:42:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:37.817 19:42:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:37.817 19:42:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:37.817 256+0 records in 00:04:37.817 256+0 records out 00:04:37.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127286 s, 82.4 MB/s 00:04:37.817 19:42:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:37.817 19:42:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:37.817 256+0 records in 00:04:37.817 256+0 records out 00:04:37.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119528 s, 87.7 MB/s 00:04:37.817 19:42:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:37.817 19:42:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:37.817 256+0 records in 00:04:37.817 256+0 records out 00:04:37.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129278 s, 81.1 MB/s 00:04:37.817 19:42:38 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:37.817 19:42:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.817 19:42:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:37.817 19:42:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:37.817 19:42:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:37.817 19:42:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:37.817 19:42:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:37.817 19:42:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:37.817 19:42:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:37.817 19:42:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:37.817 19:42:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:37.817 19:42:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:37.817 19:42:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:37.817 19:42:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.817 19:42:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.817 19:42:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:37.817 19:42:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:37.817 19:42:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:37.817 19:42:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:38.078 19:42:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:38.078 19:42:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:38.078 19:42:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:38.078 19:42:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:38.078 19:42:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:38.078 19:42:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:38.078 19:42:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:38.078 19:42:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:38.078 19:42:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:38.078 19:42:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:38.078 19:42:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:38.078 19:42:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:38.078 19:42:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:38.078 19:42:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:38.078 19:42:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:38.078 19:42:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:38.078 19:42:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:38.078 19:42:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:38.078 19:42:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:38.078 19:42:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.078 19:42:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:38.341 19:42:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:38.341 19:42:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:38.341 19:42:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:38.341 19:42:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:38.341 19:42:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:38.341 19:42:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:38.341 19:42:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:38.341 19:42:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:38.341 19:42:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:38.341 19:42:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:38.341 19:42:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:38.341 19:42:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:38.341 19:42:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:38.601 19:42:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:38.601 [2024-11-26 19:42:39.346493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:38.601 [2024-11-26 19:42:39.376455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.601 [2024-11-26 19:42:39.376455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.601 [2024-11-26 19:42:39.405256] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:38.601 [2024-11-26 19:42:39.405287] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:41.927 19:42:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:41.927 19:42:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:41.927 spdk_app_start Round 1 00:04:41.927 19:42:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3394199 /var/tmp/spdk-nbd.sock 00:04:41.927 19:42:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3394199 ']' 00:04:41.927 19:42:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:41.927 19:42:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.927 19:42:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:41.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:41.927 19:42:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.927 19:42:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:41.927 19:42:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.927 19:42:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:41.927 19:42:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:41.927 Malloc0 00:04:41.927 19:42:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:42.188 Malloc1 00:04:42.188 19:42:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:42.188 19:42:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.188 19:42:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:42.188 19:42:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:42.188 19:42:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.188 19:42:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:42.188 19:42:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:42.188 19:42:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.188 19:42:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:42.188 19:42:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:42.188 19:42:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.188 19:42:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:42.188 19:42:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:42.188 19:42:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:42.188 19:42:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:42.188 19:42:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:42.188 /dev/nbd0 00:04:42.449 19:42:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:42.449 19:42:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:42.449 19:42:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:42.449 19:42:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:42.449 19:42:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:42.449 19:42:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:42.449 19:42:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:42.449 19:42:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:42.449 19:42:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:42.449 19:42:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:42.449 19:42:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:42.449 1+0 records in 00:04:42.449 1+0 records out 00:04:42.449 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272816 s, 15.0 MB/s 00:04:42.449 19:42:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.449 19:42:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:42.449 19:42:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.449 19:42:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:42.449 19:42:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:42.449 19:42:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:42.449 19:42:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:42.449 19:42:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:42.449 /dev/nbd1 00:04:42.449 19:42:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:42.449 19:42:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:42.449 19:42:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:42.449 19:42:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:42.449 19:42:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:42.449 19:42:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:42.449 19:42:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:42.449 19:42:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:42.449 19:42:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:42.449 19:42:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:42.449 19:42:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:42.449 1+0 records in 00:04:42.449 1+0 records out 00:04:42.449 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286097 s, 14.3 MB/s 00:04:42.449 19:42:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.449 19:42:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:42.449 19:42:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.449 19:42:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:42.449 19:42:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:42.449 19:42:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:42.449 19:42:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:42.449 19:42:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:42.449 19:42:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.449 19:42:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:42.711 19:42:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:42.711 { 00:04:42.711 "nbd_device": "/dev/nbd0", 00:04:42.711 "bdev_name": "Malloc0" 00:04:42.711 }, 00:04:42.711 { 00:04:42.711 "nbd_device": "/dev/nbd1", 00:04:42.711 "bdev_name": "Malloc1" 00:04:42.711 } 00:04:42.711 ]' 00:04:42.711 19:42:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:42.711 { 00:04:42.711 "nbd_device": "/dev/nbd0", 00:04:42.711 "bdev_name": "Malloc0" 00:04:42.711 }, 00:04:42.711 { 00:04:42.711 "nbd_device": "/dev/nbd1", 00:04:42.711 "bdev_name": "Malloc1" 00:04:42.711 } 00:04:42.711 ]' 00:04:42.711 19:42:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:42.711 19:42:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:42.711 /dev/nbd1' 00:04:42.711 19:42:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:42.711 /dev/nbd1' 00:04:42.711 19:42:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:42.711 19:42:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:42.711 19:42:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:42.711 19:42:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:42.711 19:42:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:42.711 19:42:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:42.711 19:42:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.711 19:42:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:42.711 19:42:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:42.711 19:42:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:42.711 19:42:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:42.711 19:42:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:42.711 256+0 records in 00:04:42.711 256+0 records out 00:04:42.711 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127647 s, 82.1 MB/s 00:04:42.711 19:42:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:42.711 19:42:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:42.972 256+0 records in 00:04:42.972 256+0 records out 00:04:42.972 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122315 s, 85.7 MB/s 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:42.972 256+0 records in 00:04:42.972 256+0 records out 00:04:42.972 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132318 s, 79.2 MB/s 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:42.972 19:42:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:43.234 19:42:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:43.234 19:42:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:43.234 19:42:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:43.234 19:42:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:43.234 19:42:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:43.234 19:42:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:43.234 19:42:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:43.234 19:42:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:43.234 19:42:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:43.234 19:42:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.234 19:42:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:43.495 19:42:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:43.495 19:42:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:43.495 19:42:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:43.495 19:42:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:43.495 19:42:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:43.495 19:42:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:43.495 19:42:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:43.495 19:42:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:43.495 19:42:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:43.495 19:42:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:43.495 19:42:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:43.495 19:42:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:43.495 19:42:44 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:43.756 19:42:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:43.756 [2024-11-26 19:42:44.482535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:43.756 [2024-11-26 19:42:44.512529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.756 [2024-11-26 19:42:44.512529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.756 [2024-11-26 19:42:44.542048] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:43.756 [2024-11-26 19:42:44.542077] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:47.058 19:42:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:47.058 19:42:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:47.059 spdk_app_start Round 2 00:04:47.059 19:42:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3394199 /var/tmp/spdk-nbd.sock 00:04:47.059 19:42:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3394199 ']' 00:04:47.059 19:42:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:47.059 19:42:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.059 19:42:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:47.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:47.059 19:42:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.059 19:42:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:47.059 19:42:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.059 19:42:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:47.059 19:42:47 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:47.059 Malloc0 00:04:47.059 19:42:47 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:47.318 Malloc1 00:04:47.318 19:42:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:47.318 19:42:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.318 19:42:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:47.318 19:42:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:47.318 19:42:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.318 19:42:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:47.318 19:42:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:47.318 19:42:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.318 19:42:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:47.318 19:42:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:47.318 19:42:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.318 19:42:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:47.318 19:42:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:47.318 19:42:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:47.318 19:42:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.318 19:42:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:47.318 /dev/nbd0 00:04:47.318 19:42:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:47.318 19:42:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:47.318 19:42:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:47.318 19:42:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:47.318 19:42:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:47.318 19:42:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:47.319 19:42:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:47.319 19:42:48 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:47.319 19:42:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:47.319 19:42:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:47.319 19:42:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:47.319 1+0 records in 00:04:47.319 1+0 records out 00:04:47.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274619 s, 14.9 MB/s 00:04:47.319 19:42:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.319 19:42:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:47.319 19:42:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.319 19:42:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:47.319 19:42:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:47.319 19:42:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:47.319 19:42:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.319 19:42:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:47.579 /dev/nbd1 00:04:47.579 19:42:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:47.579 19:42:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:47.579 19:42:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:47.579 19:42:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:47.579 19:42:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:47.579 19:42:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:47.579 19:42:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:47.579 19:42:48 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:47.579 19:42:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:47.579 19:42:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:47.579 19:42:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:47.579 1+0 records in 00:04:47.579 1+0 records out 00:04:47.579 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020273 s, 20.2 MB/s 00:04:47.579 19:42:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.579 19:42:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:47.579 19:42:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.579 19:42:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:47.579 19:42:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:47.579 19:42:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:47.579 19:42:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.579 19:42:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:47.579 19:42:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.579 19:42:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:47.840 { 00:04:47.840 "nbd_device": "/dev/nbd0", 00:04:47.840 "bdev_name": "Malloc0" 00:04:47.840 }, 00:04:47.840 { 00:04:47.840 "nbd_device": "/dev/nbd1", 00:04:47.840 "bdev_name": "Malloc1" 00:04:47.840 } 00:04:47.840 ]' 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:47.840 { 00:04:47.840 "nbd_device": "/dev/nbd0", 00:04:47.840 "bdev_name": "Malloc0" 00:04:47.840 }, 00:04:47.840 { 00:04:47.840 "nbd_device": "/dev/nbd1", 00:04:47.840 "bdev_name": "Malloc1" 00:04:47.840 } 00:04:47.840 ]' 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:47.840 /dev/nbd1' 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:47.840 /dev/nbd1' 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:47.840 256+0 records in 00:04:47.840 256+0 records out 00:04:47.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127698 s, 82.1 MB/s 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:47.840 256+0 records in 00:04:47.840 256+0 records out 00:04:47.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119445 s, 87.8 MB/s 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:47.840 256+0 records in 00:04:47.840 256+0 records out 00:04:47.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132082 s, 79.4 MB/s 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:47.840 19:42:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:48.101 19:42:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:48.101 19:42:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:48.101 19:42:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:48.101 19:42:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:48.101 19:42:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:48.101 19:42:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:48.101 19:42:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:48.101 19:42:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:48.101 19:42:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:48.101 19:42:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:48.363 19:42:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:48.363 19:42:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:48.363 19:42:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:48.363 19:42:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:48.363 19:42:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:48.363 19:42:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:48.363 19:42:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:48.363 19:42:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:48.363 19:42:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:48.363 19:42:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.363 19:42:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:48.639 19:42:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:48.639 19:42:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:48.639 19:42:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:48.639 19:42:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:48.639 19:42:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:48.639 19:42:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:48.639 19:42:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:48.639 19:42:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:48.639 19:42:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:48.639 19:42:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:48.639 19:42:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:48.639 19:42:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:48.639 19:42:49 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:48.900 19:42:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:48.900 [2024-11-26 19:42:49.533098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:48.900 [2024-11-26 19:42:49.563692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.900 [2024-11-26 19:42:49.563692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.900 [2024-11-26 19:42:49.592897] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:48.900 [2024-11-26 19:42:49.592924] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:52.202 19:42:52 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3394199 /var/tmp/spdk-nbd.sock 00:04:52.202 19:42:52 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3394199 ']' 00:04:52.202 19:42:52 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:52.202 19:42:52 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.202 19:42:52 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:52.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:52.202 19:42:52 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.202 19:42:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:52.202 19:42:52 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.202 19:42:52 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:52.202 19:42:52 event.app_repeat -- event/event.sh@39 -- # killprocess 3394199 00:04:52.202 19:42:52 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3394199 ']' 00:04:52.202 19:42:52 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3394199 00:04:52.202 19:42:52 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:52.202 19:42:52 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.202 19:42:52 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3394199 00:04:52.202 19:42:52 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.202 19:42:52 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.202 19:42:52 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3394199' 00:04:52.202 killing process with pid 3394199 00:04:52.202 19:42:52 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3394199 00:04:52.202 19:42:52 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3394199 00:04:52.202 spdk_app_start is called in Round 0. 00:04:52.202 Shutdown signal received, stop current app iteration 00:04:52.202 Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 reinitialization... 00:04:52.202 spdk_app_start is called in Round 1. 00:04:52.202 Shutdown signal received, stop current app iteration 00:04:52.202 Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 reinitialization... 00:04:52.203 spdk_app_start is called in Round 2. 00:04:52.203 Shutdown signal received, stop current app iteration 00:04:52.203 Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 reinitialization... 00:04:52.203 spdk_app_start is called in Round 3. 
00:04:52.203 Shutdown signal received, stop current app iteration 00:04:52.203 19:42:52 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:52.203 19:42:52 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:52.203 00:04:52.203 real 0m15.756s 00:04:52.203 user 0m34.457s 00:04:52.203 sys 0m2.367s 00:04:52.203 19:42:52 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.203 19:42:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:52.203 ************************************ 00:04:52.203 END TEST app_repeat 00:04:52.203 ************************************ 00:04:52.203 19:42:52 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:52.203 19:42:52 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:52.203 19:42:52 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.203 19:42:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.203 19:42:52 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.203 ************************************ 00:04:52.203 START TEST cpu_locks 00:04:52.203 ************************************ 00:04:52.203 19:42:52 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:52.203 * Looking for test storage... 00:04:52.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:52.203 19:42:52 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:52.203 19:42:52 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:04:52.203 19:42:52 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:52.465 19:42:53 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:52.465 19:42:53 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.465 19:42:53 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.465 19:42:53 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.465 19:42:53 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.465 19:42:53 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.465 19:42:53 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.465 19:42:53 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.465 19:42:53 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.465 19:42:53 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.465 19:42:53 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.465 19:42:53 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.465 19:42:53 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:52.465 19:42:53 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:52.465 19:42:53 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.465 19:42:53 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.465 19:42:53 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:52.465 19:42:53 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:52.465 19:42:53 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.465 19:42:53 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:52.465 19:42:53 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.465 19:42:53 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:52.465 19:42:53 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:52.465 19:42:53 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.465 19:42:53 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:52.465 19:42:53 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.465 19:42:53 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.465 19:42:53 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.465 19:42:53 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:52.465 19:42:53 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.465 19:42:53 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:52.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.465 --rc genhtml_branch_coverage=1 00:04:52.465 --rc genhtml_function_coverage=1 00:04:52.465 --rc genhtml_legend=1 00:04:52.465 --rc geninfo_all_blocks=1 00:04:52.465 --rc geninfo_unexecuted_blocks=1 00:04:52.465 00:04:52.465 ' 00:04:52.465 19:42:53 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:52.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.465 --rc genhtml_branch_coverage=1 00:04:52.465 --rc genhtml_function_coverage=1 00:04:52.465 --rc genhtml_legend=1 00:04:52.465 --rc geninfo_all_blocks=1 00:04:52.465 --rc geninfo_unexecuted_blocks=1 00:04:52.465 00:04:52.465 ' 00:04:52.465 19:42:53 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:52.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.465 --rc genhtml_branch_coverage=1 00:04:52.465 --rc genhtml_function_coverage=1 00:04:52.465 --rc genhtml_legend=1 00:04:52.465 --rc geninfo_all_blocks=1 00:04:52.465 --rc geninfo_unexecuted_blocks=1 00:04:52.465 00:04:52.465 ' 00:04:52.465 19:42:53 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:52.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.465 --rc genhtml_branch_coverage=1 00:04:52.465 --rc genhtml_function_coverage=1 00:04:52.465 --rc genhtml_legend=1 00:04:52.465 --rc geninfo_all_blocks=1 00:04:52.465 --rc geninfo_unexecuted_blocks=1 00:04:52.465 00:04:52.465 ' 00:04:52.465 19:42:53 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:52.465 19:42:53 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:52.465 19:42:53 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:52.465 19:42:53 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:52.465 19:42:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.465 19:42:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.465 19:42:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:52.465 ************************************ 
00:04:52.465 START TEST default_locks 00:04:52.465 ************************************ 00:04:52.465 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:52.465 19:42:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3398158 00:04:52.465 19:42:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3398158 00:04:52.465 19:42:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:52.465 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3398158 ']' 00:04:52.465 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.465 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.465 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.465 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.465 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:52.465 [2024-11-26 19:42:53.170627] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:04:52.465 [2024-11-26 19:42:53.170693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3398158 ] 00:04:52.465 [2024-11-26 19:42:53.249477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.727 [2024-11-26 19:42:53.291322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.727 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.727 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:52.727 19:42:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3398158 00:04:52.727 19:42:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3398158 00:04:52.727 19:42:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:52.987 lslocks: write error 00:04:52.987 19:42:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3398158 00:04:52.988 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3398158 ']' 00:04:52.988 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3398158 00:04:52.988 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:52.988 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.988 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3398158 00:04:52.988 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.988 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.988 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 3398158' 00:04:52.988 killing process with pid 3398158 00:04:52.988 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3398158 00:04:52.988 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3398158 00:04:53.249 19:42:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3398158 00:04:53.249 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:53.249 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3398158 00:04:53.249 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:53.249 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.249 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:53.249 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.249 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3398158 00:04:53.249 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3398158 ']' 00:04:53.249 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.249 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.249 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
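The killprocess calls traced above always follow the same shape: confirm the pid is alive, look up its command name, refuse to signal sudo itself, then kill and reap. A hedged sketch of that pattern (simplified; the harness's real helper lives in autotest_common.sh):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                                  # fails fast if already gone
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name == sudo ]] && return 1     # never signal the sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                             # reap our child; ignore its status
    }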
00:04:53.249 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.249 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:53.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3398158) - No such process 00:04:53.249 ERROR: process (pid: 3398158) is no longer running 00:04:53.249 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.249 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:53.249 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:53.249 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:53.249 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:53.249 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:53.249 19:42:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:53.249 19:42:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:53.249 19:42:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:53.249 19:42:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:53.249 00:04:53.249 real 0m0.833s 00:04:53.249 user 0m0.839s 00:04:53.249 sys 0m0.445s 00:04:53.249 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.249 19:42:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:53.249 ************************************ 00:04:53.249 END TEST default_locks 00:04:53.249 ************************************ 00:04:53.249 19:42:53 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:53.249 19:42:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.249 19:42:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.249 19:42:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:53.249 ************************************ 00:04:53.249 START TEST default_locks_via_rpc 00:04:53.249 ************************************ 00:04:53.249 19:42:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:53.249 19:42:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3398203 00:04:53.249 19:42:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3398203 00:04:53.249 19:42:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.249 19:42:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3398203 ']' 00:04:53.249 19:42:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.249 19:42:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.249 19:42:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
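The default_locks teardown above ("No such process", es=1, return 0) is the NOT helper doing its job: the test passes precisely because waiting on the killed pid fails. A simplified sketch of those semantics (the real helper also distinguishes signal exits):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))       # NOT succeeds only when the wrapped command failed
    }
    NOT waitforlisten 3398158 /var/tmp/spdk.sock   # passes: that pid was killed above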
00:04:53.249 19:42:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.249 19:42:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.510 [2024-11-26 19:42:54.088143] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:04:53.510 [2024-11-26 19:42:54.088210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3398203 ] 00:04:53.510 [2024-11-26 19:42:54.177002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.510 [2024-11-26 19:42:54.212938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.081 19:42:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.081 19:42:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:54.081 19:42:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:54.081 19:42:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.081 19:42:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.081 19:42:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.081 19:42:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:54.081 19:42:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:54.081 19:42:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:54.081 19:42:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:54.081 19:42:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:54.081 19:42:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.081 19:42:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.081 19:42:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.081 19:42:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3398203 00:04:54.081 19:42:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3398203 00:04:54.081 19:42:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:54.650 19:42:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3398203 00:04:54.650 19:42:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3398203 ']' 00:04:54.650 19:42:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3398203 00:04:54.650 19:42:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:54.650 19:42:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.650 19:42:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3398203 00:04:54.650 19:42:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.650 
19:42:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.650 19:42:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3398203' 00:04:54.650 killing process with pid 3398203 00:04:54.650 19:42:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3398203 00:04:54.650 19:42:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3398203 00:04:54.910 00:04:54.910 real 0m1.509s 00:04:54.910 user 0m1.620s 00:04:54.910 sys 0m0.532s 00:04:54.910 19:42:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.910 19:42:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.910 ************************************ 00:04:54.910 END TEST default_locks_via_rpc 00:04:54.910 ************************************ 00:04:54.910 19:42:55 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:54.910 19:42:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.910 19:42:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.910 19:42:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:54.910 ************************************ 00:04:54.910 START TEST non_locking_app_on_locked_coremask 00:04:54.910 ************************************ 00:04:54.910 19:42:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:54.910 19:42:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3398602 00:04:54.910 19:42:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3398602 /var/tmp/spdk.sock 00:04:54.910 19:42:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:54.910 19:42:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3398602 ']' 00:04:54.910 19:42:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.910 19:42:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.910 19:42:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.910 19:42:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.910 19:42:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:54.910 [2024-11-26 19:42:55.664480] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
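The default_locks_via_rpc run above toggles core locking on a live target instead of restarting it. A hedged sketch of that sequence, assuming $SPDK_DIR is the checkout path and $pid is the running spdk_tgt; both RPC method names appear verbatim in the trace:

    rpc="$SPDK_DIR/scripts/rpc.py"                 # $SPDK_DIR: assumed checkout path
    "$rpc" framework_disable_cpumask_locks         # releases /var/tmp/spdk_cpu_lock_*
    lslocks -p "$pid" | grep -q spdk_cpu_lock || echo "no locks held, as expected"
    "$rpc" framework_enable_cpumask_locks          # re-acquires the lock files
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "locks held again"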
00:04:54.910 [2024-11-26 19:42:55.664530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3398602 ] 00:04:55.169 [2024-11-26 19:42:55.747169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.169 [2024-11-26 19:42:55.777401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.739 19:42:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.739 19:42:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:55.739 19:42:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3398927 00:04:55.739 19:42:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3398927 /var/tmp/spdk2.sock 00:04:55.739 19:42:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3398927 ']' 00:04:55.739 19:42:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:55.739 19:42:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:55.739 19:42:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.739 19:42:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:55.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:55.739 19:42:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.739 19:42:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:55.739 [2024-11-26 19:42:56.501248] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:04:55.739 [2024-11-26 19:42:56.501299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3398927 ] 00:04:55.998 [2024-11-26 19:42:56.587141] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
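The arrangement non_locking_app_on_locked_coremask is exercising here: a locked primary plus a secondary that opts out of locking, so both can sit on core 0. A hedged sketch using the same flags shown in the trace:

    bin="$SPDK_DIR/build/bin/spdk_tgt"             # $SPDK_DIR: assumed checkout path
    "$bin" -m 0x1 & pid1=$!                        # primary claims spdk_cpu_lock_000
    "$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & pid2=$!
    # The second instance logs "CPU core locks deactivated." and starts anyway,
    # because it never tries to claim the core the primary already holds.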
00:04:55.998 [2024-11-26 19:42:56.587168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.998 [2024-11-26 19:42:56.645307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.568 19:42:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.568 19:42:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:56.568 19:42:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3398602 00:04:56.568 19:42:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3398602 00:04:56.568 19:42:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:57.138 lslocks: write error 00:04:57.138 19:42:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3398602 00:04:57.138 19:42:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3398602 ']' 00:04:57.138 19:42:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3398602 00:04:57.138 19:42:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:57.138 19:42:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.138 19:42:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3398602 00:04:57.138 19:42:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.138 19:42:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.138 19:42:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3398602' 00:04:57.138 killing process with pid 3398602 00:04:57.138 19:42:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3398602 00:04:57.138 19:42:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3398602 00:04:57.707 19:42:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3398927 00:04:57.707 19:42:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3398927 ']' 00:04:57.707 19:42:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3398927 00:04:57.707 19:42:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:57.707 19:42:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.707 19:42:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3398927 00:04:57.707 19:42:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.707 19:42:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.707 19:42:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3398927' 00:04:57.707 
killing process with pid 3398927 00:04:57.707 19:42:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3398927 00:04:57.707 19:42:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3398927 00:04:57.707 00:04:57.707 real 0m2.873s 00:04:57.707 user 0m3.203s 00:04:57.707 sys 0m0.871s 00:04:57.707 19:42:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.707 19:42:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.707 ************************************ 00:04:57.707 END TEST non_locking_app_on_locked_coremask 00:04:57.707 ************************************ 00:04:57.707 19:42:58 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:57.707 19:42:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.707 19:42:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.707 19:42:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.967 ************************************ 00:04:57.967 START TEST locking_app_on_unlocked_coremask 00:04:57.967 ************************************ 00:04:57.967 19:42:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:57.967 19:42:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3399346 00:04:57.967 19:42:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3399346 /var/tmp/spdk.sock 00:04:57.967 19:42:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:57.967 19:42:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3399346 ']' 00:04:57.967 19:42:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.967 19:42:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.967 19:42:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.967 19:42:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.967 19:42:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.967 [2024-11-26 19:42:58.629799] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:04:57.967 [2024-11-26 19:42:58.629852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3399346 ] 00:04:57.967 [2024-11-26 19:42:58.712446] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
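Behind every "Waiting for process to start up and listen on UNIX domain socket..." message above sits a bounded poll loop. This is a sketch of the idea only, not the harness's exact code; the rpc.py invocation and retry count are assumptions:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( max_retries-- )); do
            kill -0 "$pid" 2>/dev/null || return 1     # target died during startup
            [[ -S $rpc_addr ]] && "$SPDK_DIR/scripts/rpc.py" -s "$rpc_addr" -t 1 \
                rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }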
00:04:57.967 [2024-11-26 19:42:58.712467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.967 [2024-11-26 19:42:58.742789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.907 19:42:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.907 19:42:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:58.907 19:42:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3399461 00:04:58.907 19:42:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3399461 /var/tmp/spdk2.sock 00:04:58.907 19:42:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3399461 ']' 00:04:58.907 19:42:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:58.907 19:42:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:58.907 19:42:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.907 19:42:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:58.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:58.907 19:42:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.907 19:42:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.907 [2024-11-26 19:42:59.459986] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
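The per-core claim that lslocks keeps reporting is an advisory lock on a file named /var/tmp/spdk_cpu_lock_NNN. The snippet below only illustrates the mechanism with flock(1); the primitive spdk_tgt actually uses is an implementation detail not shown in this log:

    # Emulate claiming core 0 the way the target's lock files behave.
    exec 9>/var/tmp/spdk_cpu_lock_000
    if flock -n 9; then
        echo "core 0 claimed"
    else
        echo "core 0 already claimed by another process" >&2
    fi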
00:04:58.907 [2024-11-26 19:42:59.460041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3399461 ] 00:04:58.907 [2024-11-26 19:42:59.548019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.907 [2024-11-26 19:42:59.604424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.476 19:43:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.476 19:43:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:59.476 19:43:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3399461 00:04:59.476 19:43:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3399461 00:04:59.476 19:43:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:00.046 lslocks: write error 00:05:00.046 19:43:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3399346 00:05:00.046 19:43:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3399346 ']' 00:05:00.046 19:43:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3399346 00:05:00.046 19:43:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:00.046 19:43:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.046 19:43:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3399346 00:05:00.046 19:43:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.046 19:43:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.046 19:43:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3399346' 00:05:00.046 killing process with pid 3399346 00:05:00.046 19:43:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3399346 00:05:00.046 19:43:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3399346 00:05:00.306 19:43:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3399461 00:05:00.306 19:43:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3399461 ']' 00:05:00.306 19:43:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3399461 00:05:00.306 19:43:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:00.306 19:43:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.306 19:43:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3399461 00:05:00.306 19:43:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.306 19:43:01 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.306 19:43:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3399461' 00:05:00.306 killing process with pid 3399461 00:05:00.306 19:43:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3399461 00:05:00.306 19:43:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3399461 00:05:00.567 00:05:00.567 real 0m2.727s 00:05:00.567 user 0m3.049s 00:05:00.567 sys 0m0.822s 00:05:00.567 19:43:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.567 19:43:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:00.567 ************************************ 00:05:00.567 END TEST locking_app_on_unlocked_coremask 00:05:00.567 ************************************ 00:05:00.568 19:43:01 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:00.568 19:43:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.568 19:43:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.568 19:43:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.568 ************************************ 00:05:00.568 START TEST locking_app_on_locked_coremask 00:05:00.568 ************************************ 00:05:00.568 19:43:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:00.568 19:43:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3399993 00:05:00.568 19:43:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3399993 /var/tmp/spdk.sock 00:05:00.568 19:43:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:00.568 19:43:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3399993 ']' 00:05:00.568 19:43:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.568 19:43:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.568 19:43:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.568 19:43:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.568 19:43:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:00.828 [2024-11-26 19:43:01.426034] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:05:00.828 [2024-11-26 19:43:01.426086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3399993 ] 00:05:00.828 [2024-11-26 19:43:01.513256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.828 [2024-11-26 19:43:01.545107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.773 19:43:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.773 19:43:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:01.773 19:43:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:01.773 19:43:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3400110 00:05:01.773 19:43:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3400110 /var/tmp/spdk2.sock 00:05:01.773 19:43:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:01.773 19:43:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3400110 /var/tmp/spdk2.sock 00:05:01.773 19:43:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:01.773 19:43:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.773 19:43:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:01.773 19:43:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.773 19:43:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3400110 /var/tmp/spdk2.sock 00:05:01.773 19:43:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3400110 ']' 00:05:01.773 19:43:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:01.773 19:43:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.773 19:43:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:01.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:01.773 19:43:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.773 19:43:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.773 [2024-11-26 19:43:02.274098] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:05:01.773 [2024-11-26 19:43:02.274148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3400110 ] 00:05:01.773 [2024-11-26 19:43:02.361279] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3399993 has claimed it. 00:05:01.773 [2024-11-26 19:43:02.361314] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:02.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3400110) - No such process 00:05:02.342 ERROR: process (pid: 3400110) is no longer running 00:05:02.342 19:43:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.342 19:43:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:02.342 19:43:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:02.342 19:43:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:02.342 19:43:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:02.342 19:43:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:02.342 19:43:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3399993 00:05:02.342 19:43:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3399993 00:05:02.342 19:43:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:02.603 lslocks: write error 00:05:02.603 19:43:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3399993 00:05:02.603 19:43:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3399993 ']' 00:05:02.603 19:43:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3399993 00:05:02.603 19:43:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:02.603 19:43:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.603 19:43:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3399993 00:05:02.603 19:43:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.603 19:43:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.603 19:43:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3399993' 00:05:02.603 killing process with pid 3399993 00:05:02.603 19:43:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3399993 00:05:02.603 19:43:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3399993 00:05:02.864 00:05:02.864 real 0m2.111s 00:05:02.864 user 0m2.395s 00:05:02.864 sys 0m0.577s 00:05:02.864 19:43:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
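The failure above ("Cannot create lock on core 0, probably process 3399993 has claimed it") is the expected outcome locking_app_on_locked_coremask asserts: with locks active on both sides, the second target must abort. A hedged sketch, reusing the NOT and waitforlisten sketches from earlier:

    bin="$SPDK_DIR/build/bin/spdk_tgt"             # $SPDK_DIR: assumed checkout path
    "$bin" -m 0x1 -r /var/tmp/spdk2.sock & pid2=$!
    if NOT waitforlisten "$pid2" /var/tmp/spdk2.sock; then
        echo "as expected: core 0 already claimed, second target never came up"
    fi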
00:05:02.864 19:43:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.864 ************************************ 00:05:02.864 END TEST locking_app_on_locked_coremask 00:05:02.864 ************************************ 00:05:02.864 19:43:03 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:02.864 19:43:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.864 19:43:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.864 19:43:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.864 ************************************ 00:05:02.864 START TEST locking_overlapped_coremask 00:05:02.864 ************************************ 00:05:02.864 19:43:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:02.864 19:43:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3400470 00:05:02.864 19:43:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3400470 /var/tmp/spdk.sock 00:05:02.864 19:43:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:02.864 19:43:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3400470 ']' 00:05:02.864 19:43:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.864 19:43:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.864 19:43:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.864 19:43:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.864 19:43:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.864 [2024-11-26 19:43:03.613743] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:05:02.864 [2024-11-26 19:43:03.613797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3400470 ] 00:05:03.124 [2024-11-26 19:43:03.698502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:03.124 [2024-11-26 19:43:03.732639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.124 [2024-11-26 19:43:03.732784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.124 [2024-11-26 19:43:03.732786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:03.696 19:43:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.696 19:43:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:03.696 19:43:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3400573 00:05:03.696 19:43:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3400573 /var/tmp/spdk2.sock 00:05:03.696 19:43:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:03.696 19:43:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:03.696 19:43:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3400573 /var/tmp/spdk2.sock 00:05:03.696 19:43:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:03.696 19:43:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.696 19:43:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:03.696 19:43:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.696 19:43:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3400573 /var/tmp/spdk2.sock 00:05:03.696 19:43:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3400573 ']' 00:05:03.696 19:43:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:03.696 19:43:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.696 19:43:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:03.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:03.696 19:43:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.696 19:43:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:03.696 [2024-11-26 19:43:04.470917] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:05:03.696 [2024-11-26 19:43:04.470971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3400573 ] 00:05:03.957 [2024-11-26 19:43:04.584353] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3400470 has claimed it. 00:05:03.957 [2024-11-26 19:43:04.584393] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:04.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3400573) - No such process 00:05:04.528 ERROR: process (pid: 3400573) is no longer running 00:05:04.528 19:43:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.528 19:43:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:04.528 19:43:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:04.528 19:43:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:04.528 19:43:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:04.528 19:43:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:04.528 19:43:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:04.528 19:43:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:04.529 19:43:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:04.529 19:43:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:04.529 19:43:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3400470 00:05:04.529 19:43:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3400470 ']' 00:05:04.529 19:43:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3400470 00:05:04.529 19:43:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:04.529 19:43:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.529 19:43:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3400470 00:05:04.529 19:43:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:04.529 19:43:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:04.529 19:43:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3400470' 00:05:04.529 killing process with pid 3400470 00:05:04.529 19:43:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3400470 00:05:04.529 19:43:05 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3400470 00:05:04.529 00:05:04.529 real 0m1.787s 00:05:04.529 user 0m5.177s 00:05:04.529 sys 0m0.390s 00:05:04.529 19:43:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.529 19:43:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.529 ************************************ 00:05:04.529 END TEST locking_overlapped_coremask 00:05:04.529 ************************************ 00:05:04.789 19:43:05 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:04.789 19:43:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.789 19:43:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.789 19:43:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:04.789 ************************************ 00:05:04.789 START TEST locking_overlapped_coremask_via_rpc 00:05:04.789 ************************************ 00:05:04.789 19:43:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:04.789 19:43:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3400871 00:05:04.789 19:43:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3400871 /var/tmp/spdk.sock 00:05:04.789 19:43:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:04.789 19:43:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3400871 ']' 00:05:04.789 19:43:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.789 19:43:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.789 19:43:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.789 19:43:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.789 19:43:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.789 [2024-11-26 19:43:05.477472] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:05:04.789 [2024-11-26 19:43:05.477526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3400871 ] 00:05:04.789 [2024-11-26 19:43:05.564039] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
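Here the via_rpc variant begins: the first target is started with --disable-cpumask-locks, so it skips the per-core lock files at boot, which is exactly what the "CPU core locks deactivated" notice records. That allows a second target with an overlapping mask to start as well; the locks are only claimed later over RPC. A sketch of the launch sequence (paths as used throughout this run):

    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    # Target 1 on cores 0-2, no core locks taken yet.
    "$SPDK_BIN" -m 0x7 --disable-cpumask-locks &
    # Target 2 on cores 2-4 can start despite the overlap on core 2.
    "$SPDK_BIN" -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &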
00:05:04.789 [2024-11-26 19:43:05.564062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:04.789 [2024-11-26 19:43:05.599213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.789 [2024-11-26 19:43:05.599556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.789 [2024-11-26 19:43:05.599557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:05.731 19:43:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.731 19:43:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:05.731 19:43:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3401043 00:05:05.731 19:43:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3401043 /var/tmp/spdk2.sock 00:05:05.731 19:43:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3401043 ']' 00:05:05.731 19:43:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:05.731 19:43:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:05.731 19:43:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.731 19:43:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:05.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:05.731 19:43:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.731 19:43:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.731 [2024-11-26 19:43:06.318871] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:05:05.731 [2024-11-26 19:43:06.318922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3401043 ] 00:05:05.731 [2024-11-26 19:43:06.431293] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:05.731 [2024-11-26 19:43:06.431320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:05.731 [2024-11-26 19:43:06.505198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:05.731 [2024-11-26 19:43:06.508282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:05.731 [2024-11-26 19:43:06.508283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:06.302 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.302 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:06.302 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:06.302 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.302 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.302 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.302 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:06.302 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:06.302 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:06.302 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:06.302 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.302 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:06.302 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.302 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:06.302 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.302 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.563 [2024-11-26 19:43:07.125237] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3400871 has claimed it. 
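With both targets up, the test re-arms the locks at runtime: framework_enable_cpumask_locks on the first target claims cores 0-2, so the same RPC against the second target (mask 0x1c, which also covers core 2) fails with the claim_cpu_cores error just logged. Equivalent rpc.py calls:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$RPC" framework_enable_cpumask_locks                        # target 1: claims cores 0-2
    "$RPC" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        && echo "unexpected: core 2 was claimed twice"           # target 2: expected to fail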
00:05:06.563 request: 00:05:06.563 { 00:05:06.563 "method": "framework_enable_cpumask_locks", 00:05:06.563 "req_id": 1 00:05:06.563 } 00:05:06.563 Got JSON-RPC error response 00:05:06.563 response: 00:05:06.563 { 00:05:06.563 "code": -32603, 00:05:06.563 "message": "Failed to claim CPU core: 2" 00:05:06.563 } 00:05:06.563 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:06.563 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:06.563 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:06.563 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:06.563 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:06.563 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3400871 /var/tmp/spdk.sock 00:05:06.563 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3400871 ']' 00:05:06.563 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.563 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.563 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.563 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.563 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.563 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.563 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:06.563 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3401043 /var/tmp/spdk2.sock 00:05:06.563 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3401043 ']' 00:05:06.563 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:06.563 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.563 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:06.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
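The failure comes back as a standard JSON-RPC error object: -32603 is the spec's generic "internal error" code, with the core-claim detail carried in the message field. The same exchange can be reproduced without rpc.py by writing raw JSON to the target's Unix socket, e.g. (a sketch; -U requires a netcat build with Unix-socket support):

    printf '%s' '{"jsonrpc":"2.0","method":"framework_enable_cpumask_locks","id":1}' \
        | nc -U /var/tmp/spdk2.sock
    # reply (abridged): {"id":1,"error":{"code":-32603,"message":"Failed to claim CPU core: 2"}}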
00:05:06.563 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.563 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.825 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.825 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:06.825 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:06.825 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:06.825 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:06.825 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:06.825 00:05:06.825 real 0m2.071s 00:05:06.825 user 0m0.842s 00:05:06.825 sys 0m0.152s 00:05:06.825 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.825 19:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.825 ************************************ 00:05:06.825 END TEST locking_overlapped_coremask_via_rpc 00:05:06.825 ************************************ 00:05:06.825 19:43:07 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:06.825 19:43:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3400871 ]] 00:05:06.825 19:43:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3400871 00:05:06.825 19:43:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3400871 ']' 00:05:06.825 19:43:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3400871 00:05:06.825 19:43:07 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:06.825 19:43:07 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.825 19:43:07 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3400871 00:05:06.825 19:43:07 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.825 19:43:07 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.825 19:43:07 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3400871' 00:05:06.825 killing process with pid 3400871 00:05:06.825 19:43:07 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3400871 00:05:06.825 19:43:07 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3400871 00:05:07.085 19:43:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3401043 ]] 00:05:07.085 19:43:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3401043 00:05:07.085 19:43:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3401043 ']' 00:05:07.085 19:43:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3401043 00:05:07.085 19:43:07 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:07.085 19:43:07 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:05:07.085 19:43:07 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3401043 00:05:07.085 19:43:07 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:07.085 19:43:07 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:07.085 19:43:07 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3401043' 00:05:07.085 killing process with pid 3401043 00:05:07.085 19:43:07 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3401043 00:05:07.085 19:43:07 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3401043 00:05:07.346 19:43:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:07.346 19:43:08 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:07.346 19:43:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3400871 ]] 00:05:07.346 19:43:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3400871 00:05:07.346 19:43:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3400871 ']' 00:05:07.346 19:43:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3400871 00:05:07.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3400871) - No such process 00:05:07.346 19:43:08 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3400871 is not found' 00:05:07.346 Process with pid 3400871 is not found 00:05:07.346 19:43:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3401043 ]] 00:05:07.346 19:43:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3401043 00:05:07.346 19:43:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3401043 ']' 00:05:07.346 19:43:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3401043 00:05:07.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3401043) - No such process 00:05:07.346 19:43:08 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3401043 is not found' 00:05:07.346 Process with pid 3401043 is not found 00:05:07.346 19:43:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:07.346 00:05:07.346 real 0m15.178s 00:05:07.346 user 0m27.107s 00:05:07.346 sys 0m4.746s 00:05:07.346 19:43:08 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.346 19:43:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.346 ************************************ 00:05:07.346 END TEST cpu_locks 00:05:07.346 ************************************ 00:05:07.346 00:05:07.346 real 0m41.026s 00:05:07.346 user 1m21.211s 00:05:07.346 sys 0m8.254s 00:05:07.346 19:43:08 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.346 19:43:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.346 ************************************ 00:05:07.346 END TEST event 00:05:07.346 ************************************ 00:05:07.346 19:43:08 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:07.346 19:43:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.346 19:43:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.346 19:43:08 -- common/autotest_common.sh@10 -- # set +x 00:05:07.608 ************************************ 00:05:07.608 START TEST thread 00:05:07.608 ************************************ 00:05:07.608 19:43:08 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:07.608 * Looking for test storage... 00:05:07.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:07.608 19:43:08 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:07.608 19:43:08 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:07.608 19:43:08 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:07.608 19:43:08 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:07.608 19:43:08 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.608 19:43:08 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.608 19:43:08 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.608 19:43:08 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.608 19:43:08 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.608 19:43:08 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.608 19:43:08 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.608 19:43:08 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.608 19:43:08 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.608 19:43:08 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.608 19:43:08 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.608 19:43:08 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:07.608 19:43:08 thread -- scripts/common.sh@345 -- # : 1 00:05:07.608 19:43:08 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.608 19:43:08 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:07.608 19:43:08 thread -- scripts/common.sh@365 -- # decimal 1 00:05:07.608 19:43:08 thread -- scripts/common.sh@353 -- # local d=1 00:05:07.608 19:43:08 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.608 19:43:08 thread -- scripts/common.sh@355 -- # echo 1 00:05:07.608 19:43:08 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.608 19:43:08 thread -- scripts/common.sh@366 -- # decimal 2 00:05:07.608 19:43:08 thread -- scripts/common.sh@353 -- # local d=2 00:05:07.608 19:43:08 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.608 19:43:08 thread -- scripts/common.sh@355 -- # echo 2 00:05:07.608 19:43:08 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.608 19:43:08 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.608 19:43:08 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.608 19:43:08 thread -- scripts/common.sh@368 -- # return 0 00:05:07.608 19:43:08 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.608 19:43:08 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:07.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.608 --rc genhtml_branch_coverage=1 00:05:07.608 --rc genhtml_function_coverage=1 00:05:07.608 --rc genhtml_legend=1 00:05:07.608 --rc geninfo_all_blocks=1 00:05:07.608 --rc geninfo_unexecuted_blocks=1 00:05:07.608 00:05:07.608 ' 00:05:07.608 19:43:08 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:07.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.608 --rc genhtml_branch_coverage=1 00:05:07.608 --rc genhtml_function_coverage=1 00:05:07.608 --rc genhtml_legend=1 00:05:07.608 --rc geninfo_all_blocks=1 00:05:07.608 --rc geninfo_unexecuted_blocks=1 00:05:07.608 
00:05:07.608 ' 00:05:07.608 19:43:08 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:07.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.608 --rc genhtml_branch_coverage=1 00:05:07.608 --rc genhtml_function_coverage=1 00:05:07.608 --rc genhtml_legend=1 00:05:07.608 --rc geninfo_all_blocks=1 00:05:07.608 --rc geninfo_unexecuted_blocks=1 00:05:07.608 00:05:07.608 ' 00:05:07.608 19:43:08 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:07.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.608 --rc genhtml_branch_coverage=1 00:05:07.608 --rc genhtml_function_coverage=1 00:05:07.608 --rc genhtml_legend=1 00:05:07.608 --rc geninfo_all_blocks=1 00:05:07.608 --rc geninfo_unexecuted_blocks=1 00:05:07.608 00:05:07.608 ' 00:05:07.608 19:43:08 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:07.608 19:43:08 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:07.608 19:43:08 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.608 19:43:08 thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.608 ************************************ 00:05:07.608 START TEST thread_poller_perf 00:05:07.608 ************************************ 00:05:07.608 19:43:08 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:07.869 [2024-11-26 19:43:08.445436] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:05:07.869 [2024-11-26 19:43:08.445541] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3401675 ] 00:05:07.869 [2024-11-26 19:43:08.531127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.869 [2024-11-26 19:43:08.562845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.869 Running 1000 pollers for 1 seconds with 1 microseconds period. 
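poller_perf registers -b pollers on a single reactor, each firing with a period of -l microseconds (0 means a busy poller that runs on every reactor iteration), runs them for -t seconds, and reports the average cost per poller invocation as poller_cost. The two runs in this log differ only in the period:

    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf
    "$PERF" -b 1000 -l 1 -t 1   # 1000 timed pollers, 1 us period, 1 s run
    "$PERF" -b 1000 -l 0 -t 1   # 1000 busy pollers (period 0), 1 s run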
00:05:08.858 [2024-11-26T18:43:09.679Z] ====================================== 00:05:08.858 [2024-11-26T18:43:09.679Z] busy:2409952272 (cyc) 00:05:08.858 [2024-11-26T18:43:09.679Z] total_run_count: 418000 00:05:08.858 [2024-11-26T18:43:09.679Z] tsc_hz: 2400000000 (cyc) 00:05:08.858 [2024-11-26T18:43:09.679Z] ====================================== 00:05:08.858 [2024-11-26T18:43:09.679Z] poller_cost: 5765 (cyc), 2402 (nsec) 00:05:08.858 00:05:08.858 real 0m1.173s 00:05:08.858 user 0m1.097s 00:05:08.858 sys 0m0.072s 00:05:08.858 19:43:09 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.858 19:43:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:08.858 ************************************ 00:05:08.858 END TEST thread_poller_perf 00:05:08.858 ************************************ 00:05:08.858 19:43:09 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:08.858 19:43:09 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:08.858 19:43:09 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.858 19:43:09 thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.858 ************************************ 00:05:08.858 START TEST thread_poller_perf 00:05:08.858 ************************************ 00:05:09.119 19:43:09 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:09.119 [2024-11-26 19:43:09.698668] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:05:09.119 [2024-11-26 19:43:09.698767] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3401903 ] 00:05:09.119 [2024-11-26 19:43:09.787498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.119 [2024-11-26 19:43:09.826184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.119 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:10.058 [2024-11-26T18:43:10.879Z] ====================================== 00:05:10.058 [2024-11-26T18:43:10.879Z] busy:2401492704 (cyc) 00:05:10.058 [2024-11-26T18:43:10.879Z] total_run_count: 5559000 00:05:10.058 [2024-11-26T18:43:10.879Z] tsc_hz: 2400000000 (cyc) 00:05:10.058 [2024-11-26T18:43:10.879Z] ====================================== 00:05:10.058 [2024-11-26T18:43:10.879Z] poller_cost: 432 (cyc), 180 (nsec) 00:05:10.058 00:05:10.058 real 0m1.177s 00:05:10.058 user 0m1.083s 00:05:10.058 sys 0m0.089s 00:05:10.058 19:43:10 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.058 19:43:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:10.058 ************************************ 00:05:10.058 END TEST thread_poller_perf 00:05:10.058 ************************************ 00:05:10.320 19:43:10 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:10.320 00:05:10.320 real 0m2.720s 00:05:10.320 user 0m2.365s 00:05:10.320 sys 0m0.371s 00:05:10.320 19:43:10 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.320 19:43:10 thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.320 ************************************ 00:05:10.320 END TEST thread 00:05:10.320 ************************************ 00:05:10.320 19:43:10 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:10.320 19:43:10 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:10.320 19:43:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.320 19:43:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.320 19:43:10 -- common/autotest_common.sh@10 -- # set +x 00:05:10.320 ************************************ 00:05:10.320 START TEST app_cmdline 00:05:10.320 ************************************ 00:05:10.320 19:43:10 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:10.320 * Looking for test storage... 
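The reported poller_cost is simply busy cycles divided by total_run_count, converted to nanoseconds via the TSC rate (tsc_hz). Re-deriving both runs above:

    # cyc = busy / total_run_count ; nsec = cyc / tsc_hz * 1e9
    awk 'BEGIN {
        printf "timed: %.0f cyc, %.0f nsec\n", 2409952272/418000,  2409952272/418000/2.4e9*1e9
        printf "busy:  %.0f cyc, %.0f nsec\n", 2401492704/5559000, 2401492704/5559000/2.4e9*1e9
    }'
    # -> timed: 5765 cyc, 2402 nsec ; busy: 432 cyc, 180 nsec

The gap (about 2402 ns versus 180 ns per invocation) is the overhead of timer bookkeeping for timed pollers, compared with busy pollers that are simply called on every reactor loop.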
00:05:10.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:10.320 19:43:11 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:10.320 19:43:11 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:10.320 19:43:11 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:10.581 19:43:11 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:10.581 19:43:11 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.581 19:43:11 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.581 19:43:11 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.581 19:43:11 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.581 19:43:11 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.581 19:43:11 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.581 19:43:11 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.581 19:43:11 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.581 19:43:11 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.581 19:43:11 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.581 19:43:11 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.581 19:43:11 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:10.581 19:43:11 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:10.581 19:43:11 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.581 19:43:11 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:10.581 19:43:11 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:10.581 19:43:11 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:10.581 19:43:11 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.581 19:43:11 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:10.581 19:43:11 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.581 19:43:11 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:10.581 19:43:11 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:10.581 19:43:11 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.581 19:43:11 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:10.581 19:43:11 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.581 19:43:11 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.581 19:43:11 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.581 19:43:11 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:10.581 19:43:11 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.581 19:43:11 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:10.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.581 --rc genhtml_branch_coverage=1 00:05:10.581 --rc genhtml_function_coverage=1 00:05:10.581 --rc genhtml_legend=1 00:05:10.581 --rc geninfo_all_blocks=1 00:05:10.581 --rc geninfo_unexecuted_blocks=1 00:05:10.581 00:05:10.581 ' 00:05:10.581 19:43:11 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:10.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.581 --rc genhtml_branch_coverage=1 00:05:10.581 --rc genhtml_function_coverage=1 00:05:10.581 --rc genhtml_legend=1 00:05:10.581 --rc geninfo_all_blocks=1 00:05:10.581 --rc geninfo_unexecuted_blocks=1 
00:05:10.581 00:05:10.581 ' 00:05:10.581 19:43:11 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:10.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.581 --rc genhtml_branch_coverage=1 00:05:10.581 --rc genhtml_function_coverage=1 00:05:10.581 --rc genhtml_legend=1 00:05:10.581 --rc geninfo_all_blocks=1 00:05:10.581 --rc geninfo_unexecuted_blocks=1 00:05:10.581 00:05:10.581 ' 00:05:10.581 19:43:11 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:10.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.581 --rc genhtml_branch_coverage=1 00:05:10.581 --rc genhtml_function_coverage=1 00:05:10.581 --rc genhtml_legend=1 00:05:10.581 --rc geninfo_all_blocks=1 00:05:10.581 --rc geninfo_unexecuted_blocks=1 00:05:10.581 00:05:10.581 ' 00:05:10.581 19:43:11 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:10.581 19:43:11 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3402222 00:05:10.581 19:43:11 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3402222 00:05:10.581 19:43:11 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:10.581 19:43:11 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3402222 ']' 00:05:10.581 19:43:11 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.581 19:43:11 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.581 19:43:11 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.581 19:43:11 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.581 19:43:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:10.581 [2024-11-26 19:43:11.230154] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
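The cmdline test starts the target with --rpcs-allowed spdk_get_version,rpc_get_methods, which limits the RPC server to exactly those two methods; every other method should be rejected, and the env_dpdk_get_mem_stats probe further down confirms it. In outline:

    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$SPDK_BIN" --rpcs-allowed spdk_get_version,rpc_get_methods &
    "$RPC" spdk_get_version        # allowed: returns the version JSON shown below
    "$RPC" rpc_get_methods         # allowed: must list exactly these two methods
    "$RPC" env_dpdk_get_mem_stats  # not allowlisted: expected to fail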
00:05:10.581 [2024-11-26 19:43:11.230220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3402222 ] 00:05:10.581 [2024-11-26 19:43:11.315625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.581 [2024-11-26 19:43:11.349321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.523 19:43:12 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.523 19:43:12 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:11.523 19:43:12 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:11.523 { 00:05:11.523 "version": "SPDK v25.01-pre git sha1 0617ba6b2", 00:05:11.523 "fields": { 00:05:11.523 "major": 25, 00:05:11.523 "minor": 1, 00:05:11.523 "patch": 0, 00:05:11.523 "suffix": "-pre", 00:05:11.523 "commit": "0617ba6b2" 00:05:11.523 } 00:05:11.523 } 00:05:11.523 19:43:12 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:11.523 19:43:12 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:11.523 19:43:12 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:11.523 19:43:12 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:11.523 19:43:12 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:11.523 19:43:12 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.523 19:43:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:11.523 19:43:12 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:11.523 19:43:12 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:11.523 19:43:12 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.523 19:43:12 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:11.523 19:43:12 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:11.523 19:43:12 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:11.523 19:43:12 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:11.523 19:43:12 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:11.523 19:43:12 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:11.523 19:43:12 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:11.523 19:43:12 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:11.523 19:43:12 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:11.523 19:43:12 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:11.523 19:43:12 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:11.523 19:43:12 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:11.523 19:43:12 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:11.523 19:43:12 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:11.783 request: 00:05:11.783 { 00:05:11.783 "method": "env_dpdk_get_mem_stats", 00:05:11.784 "req_id": 1 00:05:11.784 } 00:05:11.784 Got JSON-RPC error response 00:05:11.784 response: 00:05:11.784 { 00:05:11.784 "code": -32601, 00:05:11.784 "message": "Method not found" 00:05:11.784 } 00:05:11.784 19:43:12 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:11.784 19:43:12 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:11.784 19:43:12 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:11.784 19:43:12 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:11.784 19:43:12 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3402222 00:05:11.784 19:43:12 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3402222 ']' 00:05:11.784 19:43:12 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3402222 00:05:11.784 19:43:12 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:11.784 19:43:12 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.784 19:43:12 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3402222 00:05:11.784 19:43:12 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.784 19:43:12 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.784 19:43:12 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3402222' 00:05:11.784 killing process with pid 3402222 00:05:11.784 19:43:12 app_cmdline -- common/autotest_common.sh@973 -- # kill 3402222 00:05:11.784 19:43:12 app_cmdline -- common/autotest_common.sh@978 -- # wait 3402222 00:05:12.044 00:05:12.044 real 0m1.711s 00:05:12.044 user 0m2.059s 00:05:12.044 sys 0m0.451s 00:05:12.044 19:43:12 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.044 19:43:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:12.044 ************************************ 00:05:12.044 END TEST app_cmdline 00:05:12.044 ************************************ 00:05:12.044 19:43:12 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:12.044 19:43:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.044 19:43:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.044 19:43:12 -- common/autotest_common.sh@10 -- # set +x 00:05:12.044 ************************************ 00:05:12.044 START TEST version 00:05:12.044 ************************************ 00:05:12.044 19:43:12 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:12.044 * Looking for test storage... 
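Note the error code differs from the cpu_locks case: -32601 is JSON-RPC's "method not found", returned here because the allowlist hides env_dpdk_get_mem_stats entirely, whereas the -32603 earlier came from a method that did execute and failed internally. A quick check of the allowlist behaviour (a sketch):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    if ! "$RPC" env_dpdk_get_mem_stats 2>/tmp/rpc.err; then
        grep -q 'Method not found' /tmp/rpc.err && echo 'allowlist enforced'
    fi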
00:05:12.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:12.381 19:43:12 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:12.381 19:43:12 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:12.381 19:43:12 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:12.381 19:43:12 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:12.381 19:43:12 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.381 19:43:12 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.381 19:43:12 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.381 19:43:12 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.381 19:43:12 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.381 19:43:12 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.381 19:43:12 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.381 19:43:12 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.381 19:43:12 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.381 19:43:12 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.381 19:43:12 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.381 19:43:12 version -- scripts/common.sh@344 -- # case "$op" in 00:05:12.381 19:43:12 version -- scripts/common.sh@345 -- # : 1 00:05:12.381 19:43:12 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.381 19:43:12 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.381 19:43:12 version -- scripts/common.sh@365 -- # decimal 1 00:05:12.381 19:43:12 version -- scripts/common.sh@353 -- # local d=1 00:05:12.381 19:43:12 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.381 19:43:12 version -- scripts/common.sh@355 -- # echo 1 00:05:12.381 19:43:12 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.381 19:43:12 version -- scripts/common.sh@366 -- # decimal 2 00:05:12.381 19:43:12 version -- scripts/common.sh@353 -- # local d=2 00:05:12.381 19:43:12 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.381 19:43:12 version -- scripts/common.sh@355 -- # echo 2 00:05:12.381 19:43:12 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.381 19:43:12 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.381 19:43:12 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.381 19:43:12 version -- scripts/common.sh@368 -- # return 0 00:05:12.381 19:43:12 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.381 19:43:12 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:12.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.381 --rc genhtml_branch_coverage=1 00:05:12.381 --rc genhtml_function_coverage=1 00:05:12.381 --rc genhtml_legend=1 00:05:12.381 --rc geninfo_all_blocks=1 00:05:12.381 --rc geninfo_unexecuted_blocks=1 00:05:12.381 00:05:12.381 ' 00:05:12.381 19:43:12 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:12.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.381 --rc genhtml_branch_coverage=1 00:05:12.381 --rc genhtml_function_coverage=1 00:05:12.381 --rc genhtml_legend=1 00:05:12.381 --rc geninfo_all_blocks=1 00:05:12.381 --rc geninfo_unexecuted_blocks=1 00:05:12.381 00:05:12.381 ' 00:05:12.381 19:43:12 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:12.381 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.381 --rc genhtml_branch_coverage=1 00:05:12.381 --rc genhtml_function_coverage=1 00:05:12.381 --rc genhtml_legend=1 00:05:12.381 --rc geninfo_all_blocks=1 00:05:12.381 --rc geninfo_unexecuted_blocks=1 00:05:12.381 00:05:12.381 ' 00:05:12.381 19:43:12 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:12.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.381 --rc genhtml_branch_coverage=1 00:05:12.381 --rc genhtml_function_coverage=1 00:05:12.381 --rc genhtml_legend=1 00:05:12.381 --rc geninfo_all_blocks=1 00:05:12.381 --rc geninfo_unexecuted_blocks=1 00:05:12.381 00:05:12.381 ' 00:05:12.381 19:43:12 version -- app/version.sh@17 -- # get_header_version major 00:05:12.381 19:43:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:12.381 19:43:12 version -- app/version.sh@14 -- # cut -f2 00:05:12.381 19:43:12 version -- app/version.sh@14 -- # tr -d '"' 00:05:12.381 19:43:12 version -- app/version.sh@17 -- # major=25 00:05:12.381 19:43:12 version -- app/version.sh@18 -- # get_header_version minor 00:05:12.381 19:43:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:12.381 19:43:12 version -- app/version.sh@14 -- # cut -f2 00:05:12.381 19:43:12 version -- app/version.sh@14 -- # tr -d '"' 00:05:12.381 19:43:12 version -- app/version.sh@18 -- # minor=1 00:05:12.381 19:43:12 version -- app/version.sh@19 -- # get_header_version patch 00:05:12.381 19:43:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:12.381 19:43:12 version -- app/version.sh@14 -- # cut -f2 00:05:12.381 19:43:12 version -- app/version.sh@14 -- # tr -d '"' 00:05:12.381 19:43:12 version -- app/version.sh@19 -- # patch=0 00:05:12.381 19:43:12 version -- app/version.sh@20 -- # get_header_version suffix 00:05:12.381 19:43:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:12.381 19:43:12 version -- app/version.sh@14 -- # cut -f2 00:05:12.381 19:43:12 version -- app/version.sh@14 -- # tr -d '"' 00:05:12.381 19:43:13 version -- app/version.sh@20 -- # suffix=-pre 00:05:12.381 19:43:13 version -- app/version.sh@22 -- # version=25.1 00:05:12.381 19:43:13 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:12.381 19:43:13 version -- app/version.sh@28 -- # version=25.1rc0 00:05:12.381 19:43:13 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:12.381 19:43:13 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:12.381 19:43:13 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:12.381 19:43:13 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:12.381 00:05:12.381 real 0m0.288s 00:05:12.381 user 0m0.167s 00:05:12.381 sys 0m0.169s 00:05:12.381 19:43:13 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.381 
19:43:13 version -- common/autotest_common.sh@10 -- # set +x 00:05:12.381 ************************************ 00:05:12.381 END TEST version 00:05:12.381 ************************************ 00:05:12.381 19:43:13 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:12.381 19:43:13 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:12.381 19:43:13 -- spdk/autotest.sh@194 -- # uname -s 00:05:12.381 19:43:13 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:12.381 19:43:13 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:12.381 19:43:13 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:12.381 19:43:13 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:12.381 19:43:13 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:12.381 19:43:13 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:12.381 19:43:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:12.381 19:43:13 -- common/autotest_common.sh@10 -- # set +x 00:05:12.381 19:43:13 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:12.381 19:43:13 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:12.381 19:43:13 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:12.381 19:43:13 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:12.381 19:43:13 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:12.381 19:43:13 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:12.381 19:43:13 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:12.381 19:43:13 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:12.381 19:43:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.381 19:43:13 -- common/autotest_common.sh@10 -- # set +x 00:05:12.665 ************************************ 00:05:12.665 START TEST nvmf_tcp 00:05:12.665 ************************************ 00:05:12.665 19:43:13 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:12.665 * Looking for test storage... 
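The version test that just ended never queries a running binary: it scrapes include/spdk/version.h with the grep/cut/tr pipeline seen above, assembles 25.1 plus the -pre suffix into 25.1rc0, and requires python's spdk.__version__ to agree. The pipeline, condensed (assuming the tab-delimited #define layout of version.h):

    HDR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$HDR" | cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)    # 25
    minor=$(get_header_version MINOR)    # 1
    suffix=$(get_header_version SUFFIX)  # -pre -> version reported as 25.1rc0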
00:05:12.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:12.665 19:43:13 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:12.665 19:43:13 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:12.665 19:43:13 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:12.665 19:43:13 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:12.665 19:43:13 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.665 19:43:13 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.665 19:43:13 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.665 19:43:13 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.665 19:43:13 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.665 19:43:13 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.665 19:43:13 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.665 19:43:13 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.665 19:43:13 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.665 19:43:13 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.665 19:43:13 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.665 19:43:13 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:12.665 19:43:13 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:12.665 19:43:13 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.665 19:43:13 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.665 19:43:13 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:12.665 19:43:13 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:12.665 19:43:13 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.665 19:43:13 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:12.665 19:43:13 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.665 19:43:13 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:12.665 19:43:13 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:12.665 19:43:13 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.665 19:43:13 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:12.665 19:43:13 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.665 19:43:13 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.665 19:43:13 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.665 19:43:13 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:12.665 19:43:13 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.665 19:43:13 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:12.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.665 --rc genhtml_branch_coverage=1 00:05:12.665 --rc genhtml_function_coverage=1 00:05:12.665 --rc genhtml_legend=1 00:05:12.665 --rc geninfo_all_blocks=1 00:05:12.665 --rc geninfo_unexecuted_blocks=1 00:05:12.665 00:05:12.665 ' 00:05:12.665 19:43:13 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:12.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.665 --rc genhtml_branch_coverage=1 00:05:12.665 --rc genhtml_function_coverage=1 00:05:12.665 --rc genhtml_legend=1 00:05:12.665 --rc geninfo_all_blocks=1 00:05:12.665 --rc geninfo_unexecuted_blocks=1 00:05:12.665 00:05:12.665 ' 00:05:12.665 19:43:13 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:12.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.665 --rc genhtml_branch_coverage=1 00:05:12.665 --rc genhtml_function_coverage=1 00:05:12.665 --rc genhtml_legend=1 00:05:12.665 --rc geninfo_all_blocks=1 00:05:12.665 --rc geninfo_unexecuted_blocks=1 00:05:12.665 00:05:12.665 ' 00:05:12.665 19:43:13 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:12.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.665 --rc genhtml_branch_coverage=1 00:05:12.665 --rc genhtml_function_coverage=1 00:05:12.665 --rc genhtml_legend=1 00:05:12.665 --rc geninfo_all_blocks=1 00:05:12.665 --rc geninfo_unexecuted_blocks=1 00:05:12.665 00:05:12.665 ' 00:05:12.665 19:43:13 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:12.665 19:43:13 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:12.665 19:43:13 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:12.665 19:43:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:12.665 19:43:13 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.665 19:43:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:12.665 ************************************ 00:05:12.665 START TEST nvmf_target_core 00:05:12.665 ************************************ 00:05:12.665 19:43:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:12.927 * Looking for test storage... 00:05:12.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:12.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.927 --rc genhtml_branch_coverage=1 00:05:12.927 --rc genhtml_function_coverage=1 00:05:12.927 --rc genhtml_legend=1 00:05:12.927 --rc geninfo_all_blocks=1 00:05:12.927 --rc geninfo_unexecuted_blocks=1 00:05:12.927 00:05:12.927 ' 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:12.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.927 --rc genhtml_branch_coverage=1 00:05:12.927 --rc genhtml_function_coverage=1 00:05:12.927 --rc genhtml_legend=1 00:05:12.927 --rc geninfo_all_blocks=1 00:05:12.927 --rc geninfo_unexecuted_blocks=1 00:05:12.927 00:05:12.927 ' 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:12.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.927 --rc genhtml_branch_coverage=1 00:05:12.927 --rc genhtml_function_coverage=1 00:05:12.927 --rc genhtml_legend=1 00:05:12.927 --rc geninfo_all_blocks=1 00:05:12.927 --rc geninfo_unexecuted_blocks=1 00:05:12.927 00:05:12.927 ' 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:12.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.927 --rc genhtml_branch_coverage=1 00:05:12.927 --rc genhtml_function_coverage=1 00:05:12.927 --rc genhtml_legend=1 00:05:12.927 --rc geninfo_all_blocks=1 00:05:12.927 --rc geninfo_unexecuted_blocks=1 00:05:12.927 00:05:12.927 ' 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:12.927 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:12.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:12.928 
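A note on the version probe that opens each traced test: autotest_common.sh is deciding which lcov option spellings to use. cmp_versions walks the dot-separated components of the version reported by "lcov --version | awk '{print $NF}'" against 2, and since 1.15 < 2 it keeps the pre-2.0 rc option names seen in LCOV_OPTS. A condensed sketch of the same gate, using sort -V for brevity instead of the per-component array walk that scripts/common.sh actually performs:

  # lt: is dotted version $1 strictly older than $2?
  lt() {
    [ "$1" != "$2" ] &&
      [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
  }
  ver=$(lcov --version | awk '{print $NF}')
  if lt "$ver" 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi
  # The "[: : integer expression expected" lines in this log are a separate,
  # benign artifact: test/nvmf/common.sh line 33 runs a numeric test of the
  # form [ "$VAR" -eq 1 ] with the variable expanding empty, so bash prints
  # the warning and the test simply evaluates false. (VAR is illustrative;
  # the trace shows only the expanded '[' '' -eq 1 ']'.)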
************************************ 00:05:12.928 START TEST nvmf_abort 00:05:12.928 ************************************ 00:05:12.928 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:13.189 * Looking for test storage... 00:05:13.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.189 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:13.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.189 --rc genhtml_branch_coverage=1 00:05:13.189 --rc genhtml_function_coverage=1 00:05:13.190 --rc genhtml_legend=1 00:05:13.190 --rc geninfo_all_blocks=1 00:05:13.190 --rc geninfo_unexecuted_blocks=1 00:05:13.190 00:05:13.190 ' 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:13.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.190 --rc genhtml_branch_coverage=1 00:05:13.190 --rc genhtml_function_coverage=1 00:05:13.190 --rc genhtml_legend=1 00:05:13.190 --rc geninfo_all_blocks=1 00:05:13.190 --rc geninfo_unexecuted_blocks=1 00:05:13.190 00:05:13.190 ' 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:13.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.190 --rc genhtml_branch_coverage=1 00:05:13.190 --rc genhtml_function_coverage=1 00:05:13.190 --rc genhtml_legend=1 00:05:13.190 --rc geninfo_all_blocks=1 00:05:13.190 --rc geninfo_unexecuted_blocks=1 00:05:13.190 00:05:13.190 ' 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:13.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.190 --rc genhtml_branch_coverage=1 00:05:13.190 --rc genhtml_function_coverage=1 00:05:13.190 --rc genhtml_legend=1 00:05:13.190 --rc geninfo_all_blocks=1 00:05:13.190 --rc geninfo_unexecuted_blocks=1 00:05:13.190 00:05:13.190 ' 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:13.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
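nvmftestinit, called at the end of the trace above, performs the physical-NIC TCP setup whose xtrace follows: it gathers the Intel e810 ports over PCI (the 0x8086 - 0x159b matches), picks cvl_0_0 and cvl_0_1, moves the target port into a private network namespace, addresses both ends, opens the firewall for port 4420, and pings each direction. Condensed to the bare privileged commands, all taken from the trace below:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator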
00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:13.190 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:13.191 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:13.191 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:13.191 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:13.191 19:43:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.340 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:21.340 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:21.340 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:21.340 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:21.340 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:21.340 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:21.340 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:21.340 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:21.340 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:21.340 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:21.340 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:21.341 19:43:21 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:21.341 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:21.341 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:21.341 19:43:21 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:21.341 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:21.341 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:21.341 19:43:21 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:21.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:21.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.707 ms 00:05:21.341 00:05:21.341 --- 10.0.0.2 ping statistics --- 00:05:21.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:21.341 rtt min/avg/max/mdev = 0.707/0.707/0.707/0.000 ms 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:21.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:21.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:05:21.341 00:05:21.341 --- 10.0.0.1 ping statistics --- 00:05:21.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:21.341 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.341 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3407238 00:05:21.342 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3407238 00:05:21.342 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:21.342 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3407238 ']' 00:05:21.342 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.342 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.342 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.342 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.342 19:43:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.342 [2024-11-26 19:43:21.657773] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
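The nvmfappstart step just traced boils down to launching nvmf_tgt inside the target namespace and blocking until its RPC socket answers; the -m 0xE mask pins reactors to cores 1-3, matching the "Total cores available: 3" notice and the three "Reactor started" lines in the output that follows. Roughly, with paths shortened and the readiness loop paraphrasing waitforlisten rather than quoting the verbatim helper:

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # poll the default RPC socket until the target is up
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done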
00:05:21.342 [2024-11-26 19:43:21.657843] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:21.342 [2024-11-26 19:43:21.752964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:21.342 [2024-11-26 19:43:21.786994] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:21.342 [2024-11-26 19:43:21.787026] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:21.342 [2024-11-26 19:43:21.787033] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:21.342 [2024-11-26 19:43:21.787038] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:21.342 [2024-11-26 19:43:21.787042] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:21.342 [2024-11-26 19:43:21.788221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:21.342 [2024-11-26 19:43:21.788367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.342 [2024-11-26 19:43:21.788369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.915 [2024-11-26 19:43:22.518928] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.915 Malloc0 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.915 Delay0 
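With the target up, abort.sh assembles its stack over RPC, as traced here: a TCP transport, a 64 MiB Malloc bdev with 4 KiB blocks, and a delay bdev layered on top with one-second average and p99 latencies on every operation; the subsystem, namespace, and listener calls follow in the trace. The inflated latency is the point of the test: queued I/O lingers long enough for the abort example (invoked below with queue depth 128 for one second on core 0) to cancel it in flight. The same sequence with the standalone rpc.py client, assuming the default /var/tmp/spdk.sock socket:

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc.py bdev_malloc_create 64 4096 -b Malloc0
  # -r/-t: avg/p99 read latency, -w/-n: avg/p99 write latency (microseconds)
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420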
00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.915 [2024-11-26 19:43:22.602246] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.915 19:43:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:22.175 [2024-11-26 19:43:22.795333] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:24.716 Initializing NVMe Controllers 00:05:24.716 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:24.716 controller IO queue size 128 less than required 00:05:24.716 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:24.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:24.716 Initialization complete. Launching workers. 
00:05:24.716 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27666 00:05:24.716 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27727, failed to submit 62 00:05:24.716 success 27670, unsuccessful 57, failed 0 00:05:24.716 19:43:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:24.716 19:43:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.716 19:43:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:24.716 19:43:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.716 19:43:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:24.716 19:43:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:24.716 19:43:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:24.716 19:43:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:24.716 19:43:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:24.716 19:43:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:24.716 19:43:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:24.716 19:43:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:24.716 rmmod nvme_tcp 00:05:24.716 rmmod nvme_fabrics 00:05:24.716 rmmod nvme_keyring 00:05:24.716 19:43:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:24.716 19:43:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:24.716 19:43:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:24.716 19:43:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3407238 ']' 00:05:24.716 19:43:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3407238 00:05:24.716 19:43:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3407238 ']' 00:05:24.716 19:43:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3407238 00:05:24.716 19:43:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:24.716 19:43:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.716 19:43:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3407238 00:05:24.716 19:43:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:24.716 19:43:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:24.716 19:43:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3407238' 00:05:24.716 killing process with pid 3407238 00:05:24.716 19:43:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3407238 00:05:24.716 19:43:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3407238 00:05:24.716 19:43:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:24.716 19:43:25 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:24.716 19:43:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:24.716 19:43:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:24.716 19:43:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:24.716 19:43:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:24.716 19:43:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:24.716 19:43:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:24.716 19:43:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:24.716 19:43:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:24.716 19:43:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:24.716 19:43:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:26.629 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:26.629 00:05:26.629 real 0m13.611s 00:05:26.629 user 0m14.496s 00:05:26.629 sys 0m6.575s 00:05:26.629 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.629 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.629 ************************************ 00:05:26.629 END TEST nvmf_abort 00:05:26.629 ************************************ 00:05:26.629 19:43:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:26.629 19:43:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:26.629 19:43:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.629 19:43:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:26.629 ************************************ 00:05:26.629 START TEST nvmf_ns_hotplug_stress 00:05:26.629 ************************************ 00:05:26.629 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:26.890 * Looking for test storage... 
00:05:26.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:26.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.890 --rc genhtml_branch_coverage=1 00:05:26.890 --rc genhtml_function_coverage=1 00:05:26.890 --rc genhtml_legend=1 00:05:26.890 --rc geninfo_all_blocks=1 00:05:26.890 --rc geninfo_unexecuted_blocks=1 00:05:26.890 00:05:26.890 ' 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:26.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.890 --rc genhtml_branch_coverage=1 00:05:26.890 --rc genhtml_function_coverage=1 00:05:26.890 --rc genhtml_legend=1 00:05:26.890 --rc geninfo_all_blocks=1 00:05:26.890 --rc geninfo_unexecuted_blocks=1 00:05:26.890 00:05:26.890 ' 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:26.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.890 --rc genhtml_branch_coverage=1 00:05:26.890 --rc genhtml_function_coverage=1 00:05:26.890 --rc genhtml_legend=1 00:05:26.890 --rc geninfo_all_blocks=1 00:05:26.890 --rc geninfo_unexecuted_blocks=1 00:05:26.890 00:05:26.890 ' 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:26.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.890 --rc genhtml_branch_coverage=1 00:05:26.890 --rc genhtml_function_coverage=1 00:05:26.890 --rc genhtml_legend=1 00:05:26.890 --rc geninfo_all_blocks=1 00:05:26.890 --rc geninfo_unexecuted_blocks=1 00:05:26.890 00:05:26.890 ' 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.890 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
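paths/export.sh prepends the Go/protoc/golangci directories every time it is sourced, which is why the PATH lines above carry the same entries many times over. That is harmless, but if the repetition mattered, an order-preserving de-duplication could look something like this (a sketch, not part of the SPDK scripts):

  # keep the first occurrence of each PATH entry, drop later repeats
  dedupe_path() {
    local out='' dir
    while IFS= read -r -d ':' dir; do
      case ":$out:" in
        *":$dir:"*) ;;                  # already present: skip
        *) out=${out:+$out:}$dir ;;     # first sighting: append
      esac
    done <<< "$PATH:"                   # trailing ':' so the last entry is read
    printf '%s\n' "$out"
  }
  PATH=$(dedupe_path)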
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:26.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:26.891 19:43:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:35.031 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:35.031 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:35.031 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:35.031 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:35.031 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
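The "[: : integer expression expected" complaint above comes from common.sh line 33 evaluating '[' '' -eq 1 ']': the -eq operator needs an integer on both sides, and the variable being tested expands to an empty string. The usual defensive pattern, shown here with a hypothetical variable name rather than the actual common.sh fix:

  # default the operand so '[' always sees an integer
  some_flag=''                          # unset/empty, as in this run (hypothetical name)
  if [ "${some_flag:-0}" -eq 1 ]; then  # '' becomes 0, so the test cannot error
    echo "flag set"
  fi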
local -ga e810 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:35.032 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:35.032 
19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:35.032 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:35.032 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
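The two "Found net devices under ..." blocks around here are gather_supported_nvmf_pci_devs resolving each E810 port to its kernel interface through sysfs. The lookup itself is just a glob, roughly:

  # list the net devices bound to one PCI function, as done for 0000:4b:00.0
  pci=0000:4b:00.0                      # first E810 port from this run
  for dev in "/sys/bus/pci/devices/$pci/net/"*; do
    [ -e "$dev" ] || continue           # no match: the glob stays literal
    echo "Found net devices under $pci: ${dev##*/}"
  done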
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:35.032 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:35.032 19:43:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:35.032 19:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:35.032 19:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:35.032 19:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:35.032 19:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:35.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:35.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:05:35.032 00:05:35.032 --- 10.0.0.2 ping statistics --- 00:05:35.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:35.032 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:05:35.032 19:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:35.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:35.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:05:35.032 00:05:35.032 --- 10.0.0.1 ping statistics --- 00:05:35.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:35.032 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:05:35.032 19:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:35.032 19:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:35.033 19:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:35.033 19:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:35.033 19:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:35.033 19:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:35.033 19:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:35.033 19:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:35.033 19:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:35.033 19:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:35.033 19:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:35.033 19:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:35.033 19:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:35.033 19:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3412598 00:05:35.033 19:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3412598 00:05:35.033 19:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:35.033 19:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
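nvmf_tcp_init above builds the usual two-interface test topology: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens port 4420, and the cross pings confirm the path before anything NVMe-related starts. Reduced to its core, with interface and namespace names taken from this run:

  # target interface lives in its own namespace; initiator stays in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator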
3412598 ']' 00:05:35.033 19:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.033 19:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.033 19:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.033 19:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.033 19:43:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:35.033 [2024-11-26 19:43:35.189959] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:05:35.033 [2024-11-26 19:43:35.190029] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:35.033 [2024-11-26 19:43:35.290801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:35.033 [2024-11-26 19:43:35.342783] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:35.033 [2024-11-26 19:43:35.342840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:35.033 [2024-11-26 19:43:35.342849] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:35.033 [2024-11-26 19:43:35.342856] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:35.033 [2024-11-26 19:43:35.342862] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
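waitforlisten 3412598 above blocks until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock, retrying up to the max_retries=100 it just declared. A plausible reduction of that wait loop (polling pattern only; the real autotest_common.sh helper does more bookkeeping):

  # poll until the app both survives and serves RPC on the socket
  pid=3412598 rpc_addr=/var/tmp/spdk.sock
  for (( i = 0; i < 100; i++ )); do
    kill -0 "$pid" || { echo "app died before listening"; exit 1; }
    scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null && break
    sleep 0.5
  done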
00:05:35.033 [2024-11-26 19:43:35.344783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.033 [2024-11-26 19:43:35.344944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.033 [2024-11-26 19:43:35.344944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:35.295 19:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.295 19:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:35.295 19:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:35.295 19:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:35.295 19:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:35.295 19:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:35.295 19:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:35.295 19:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:35.557 [2024-11-26 19:43:36.234766] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:35.557 19:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:35.818 19:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:35.818 [2024-11-26 19:43:36.625770] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:36.079 19:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:36.079 19:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:36.341 Malloc0 00:05:36.341 19:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:36.601 Delay0 00:05:36.601 19:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.861 19:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:36.861 NULL1 00:05:36.861 19:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
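At this point the target app is up, and ns_hotplug_stress.sh lines 25-35 above provision it over JSON-RPC: a TCP transport, one subsystem capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, then a malloc bdev wrapped in a delay bdev plus a null bdev to hot-plug. The same sequence, stripped of the workspace paths:

  rpc=scripts/rpc.py                    # shorthand for the full rpc.py path above
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0      # 32 MiB backing store, 512-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc bdev_null_create NULL1 1000 512           # 1000 MiB null bdev, resized each loop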
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:37.120 19:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3413396 00:05:37.120 19:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:37.120 19:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:37.120 19:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.379 19:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.379 19:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:37.379 19:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:37.639 true 00:05:37.640 19:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:37.640 19:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.901 19:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.161 19:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:38.161 19:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:38.161 true 00:05:38.161 19:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:38.161 19:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.421 19:43:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.682 19:43:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:38.682 19:43:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:38.682 true 00:05:38.682 19:43:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:38.682 19:43:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
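Everything from here to the end of the section is the stress loop itself: with spdk_nvme_perf (PID 3413396) hammering the subsystem over TCP, each pass detaches namespace 1, re-attaches Delay0, grows NULL1 by one block count (null_size=1001, 1002, ... above), and re-checks with kill -0 that perf is still running. One turn of that loop, schematically:

  rpc=scripts/rpc.py
  null_size=1000
  # PERF_PID (3413396 in this run) was captured when spdk_nvme_perf was backgrounded
  while kill -0 "$PERF_PID" 2> /dev/null; do
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-unplug ns 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-plug it back
    (( ++null_size ))
    $rpc bdev_null_resize NULL1 "$null_size"                      # resize under I/O
  done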
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.941 19:43:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.201 19:43:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:39.201 19:43:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:39.201 true 00:05:39.201 19:43:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:39.201 19:43:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.462 19:43:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.723 19:43:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:39.723 19:43:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:39.723 true 00:05:39.983 19:43:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:39.983 19:43:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.983 19:43:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.244 19:43:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:40.244 19:43:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:40.505 true 00:05:40.505 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:40.505 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.505 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.766 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:40.766 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:41.026 true 00:05:41.026 19:43:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:41.026 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.026 19:43:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.287 19:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:41.287 19:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:41.548 true 00:05:41.548 19:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:41.548 19:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.809 19:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.809 19:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:41.809 19:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:42.070 true 00:05:42.070 19:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:42.070 19:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.330 19:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.330 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:42.330 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:42.588 true 00:05:42.588 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:42.589 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.848 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.108 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:43.108 19:43:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:43.108 true 00:05:43.108 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:43.108 19:43:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.368 19:43:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.628 19:43:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:43.628 19:43:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:43.628 true 00:05:43.628 19:43:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:43.628 19:43:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.889 19:43:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.148 19:43:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:44.148 19:43:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:44.411 true 00:05:44.411 19:43:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:44.411 19:43:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.411 19:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.673 19:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:44.673 19:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:44.934 true 00:05:44.934 19:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:44.934 19:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.934 19:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.195 19:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:45.195 19:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:45.455 true 00:05:45.455 19:43:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:45.455 19:43:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.716 19:43:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.716 19:43:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:45.716 19:43:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:45.977 true 00:05:45.977 19:43:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:45.977 19:43:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.237 19:43:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.237 19:43:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:46.237 19:43:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:46.497 true 00:05:46.497 19:43:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:46.497 19:43:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.757 19:43:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.018 19:43:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:47.018 19:43:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:47.018 true 00:05:47.018 19:43:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:47.018 19:43:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.278 19:43:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.539 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:47.539 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:47.539 true 00:05:47.539 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:47.539 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.800 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.060 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:48.060 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:48.060 true 00:05:48.060 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:48.320 19:43:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.320 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.580 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:48.580 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:48.580 true 00:05:48.840 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:48.840 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.840 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.100 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:49.100 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:49.361 true 00:05:49.361 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:49.361 19:43:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.361 19:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.623 19:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:49.623 19:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:49.885 true 00:05:49.885 19:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:49.885 19:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.146 19:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.146 19:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:50.146 19:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:50.407 true 00:05:50.407 19:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:50.407 19:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.668 19:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.928 19:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:50.928 19:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:50.928 true 00:05:50.928 19:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:50.928 19:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.192 19:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.454 19:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:51.454 19:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:51.454 true 00:05:51.454 19:43:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:51.454 19:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.715 19:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.976 19:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:51.976 19:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:51.976 true 00:05:52.237 19:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:52.237 19:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.237 19:43:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.498 19:43:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:52.498 19:43:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:52.760 true 00:05:52.760 19:43:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:52.760 19:43:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.760 19:43:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.022 19:43:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:53.022 19:43:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:53.283 true 00:05:53.283 19:43:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:53.283 19:43:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.544 19:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.544 19:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:53.544 19:43:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:53.804 true 00:05:53.804 19:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:53.804 19:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.067 19:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.067 19:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:05:54.067 19:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:05:54.329 true 00:05:54.329 19:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:54.329 19:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.590 19:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.852 19:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:05:54.852 19:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:05:54.852 true 00:05:54.852 19:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:54.852 19:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.114 19:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.114 19:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:05:55.114 19:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:05:55.375 true 00:05:55.375 19:43:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:55.375 19:43:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.636 19:43:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.636 19:43:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:05:55.636 19:43:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:05:55.896 true 00:05:55.896 19:43:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:55.896 19:43:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.896 19:43:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.157 19:43:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:05:56.157 19:43:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:05:56.418 true 00:05:56.418 19:43:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:56.418 19:43:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.679 19:43:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.679 19:43:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:05:56.679 19:43:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:05:56.940 true 00:05:56.940 19:43:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:56.940 19:43:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.202 19:43:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.202 19:43:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:05:57.202 19:43:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:05:57.463 true 00:05:57.463 19:43:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:57.464 19:43:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.725 19:43:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.987 19:43:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:05:57.987 19:43:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:05:57.987 true 00:05:57.987 19:43:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:57.987 19:43:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.248 19:43:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.520 19:43:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:05:58.520 19:43:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:05:58.520 true 00:05:58.520 19:43:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:58.521 19:43:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.781 19:43:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.781 19:43:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:05:58.781 19:43:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:05:59.042 true 00:05:59.042 19:43:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:59.042 19:43:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.303 19:43:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.563 19:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:05:59.563 19:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:05:59.563 true 00:05:59.563 19:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:05:59.563 19:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.824 19:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.084 19:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:06:00.084 19:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:06:00.084 true 00:06:00.084 19:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:06:00.084 19:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.344 19:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.605 19:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:06:00.605 19:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:06:00.605 true 00:06:00.605 19:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:06:00.605 19:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.872 19:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.168 19:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:06:01.168 19:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:06:01.168 true 00:06:01.168 19:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:06:01.168 19:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.489 19:44:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.753 19:44:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:06:01.753 19:44:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:06:01.753 true 00:06:01.753 19:44:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:06:01.753 19:44:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.013 19:44:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.273 19:44:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:06:02.273 19:44:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:02.273 true 00:06:02.273 19:44:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:06:02.273 19:44:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.534 19:44:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.795 19:44:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:06:02.795 19:44:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:06:02.795 true 00:06:03.055 19:44:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:06:03.055 19:44:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.055 19:44:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.315 19:44:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:06:03.315 19:44:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:06:03.575 true 00:06:03.575 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:06:03.575 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.575 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.835 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:06:03.835 19:44:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:06:04.095 true 00:06:04.095 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:06:04.095 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.095 19:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.354 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:06:04.354 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:06:04.615 true 00:06:04.615 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:06:04.615 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.876 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.876 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:06:04.876 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:06:05.137 true 00:06:05.137 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:06:05.137 19:44:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.397 19:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.658 19:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:06:05.658 19:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:06:05.658 true 00:06:05.658 19:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:06:05.658 19:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.919 19:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.180 19:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:06:06.180 19:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:06:06.180 true 00:06:06.180 19:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:06:06.180 19:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.442 19:44:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.702 19:44:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:06:06.702 19:44:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:06:06.702 true 00:06:06.702 19:44:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:06:06.702 19:44:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.962 19:44:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.222 19:44:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:06:07.222 19:44:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:06:07.222 true 00:06:07.482 19:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:06:07.482 19:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.482 Initializing NVMe Controllers 00:06:07.482 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:07.482 Controller IO queue size 128, less than required. 00:06:07.482 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:07.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:07.482 Initialization complete. Launching workers. 
00:06:07.482 ========================================================
00:06:07.482                                                                              Latency(us)
00:06:07.482 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:07.482 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30386.27      14.84    4212.24    1139.75   10470.19
00:06:07.482 ========================================================
00:06:07.482 Total                                                                    :   30386.27      14.84    4212.24    1139.75   10470.19
00:06:07.482
00:06:07.482 19:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.741 19:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:06:07.741 19:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:06:08.002 true 00:06:08.002 19:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3413396 00:06:08.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3413396) - No such process 00:06:08.002 19:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3413396 00:06:08.002 19:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.002 19:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:08.262 19:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:08.262 19:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:08.262 19:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:08.262 19:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:08.262 19:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:08.524 null0 00:06:08.524 19:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:08.524 19:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:08.524 19:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:08.524 null1 00:06:08.524 19:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:08.524 19:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:08.524 19:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:08.784 null2 00:06:08.784 19:44:09
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:08.784 19:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:08.784 19:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:09.043 null3 00:06:09.043 19:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:09.043 19:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:09.043 19:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:09.043 null4 00:06:09.043 19:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:09.043 19:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:09.043 19:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:09.303 null5 00:06:09.303 19:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:09.303 19:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:09.303 19:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:09.565 null6 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:09.565 null7 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
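With the I/O generator gone (the @44 kill -0 probe failed with "No such process" and the @53 wait reaped PID 3413396), the single-namespace phase is over. As a sanity check on its summary table above, 30386.27 IOPS at 14.84 MiB/s works out to 512-byte I/Os (30386.27 x 512 / 2^20 ≈ 14.84). The trace then removes the remaining namespaces and provisions one null bdev per worker for the parallel phase; a sketch under the same assumptions as before (only the @58-@60 commands come from the log):

  nthreads=8                                      # @58: eight concurrent hotplug workers
  pids=()                                         # @58: background-job PIDs, filled in at @64
  for ((i = 0; i < nthreads; i++)); do            # @59
      "$rpc" bdev_null_create "null$i" 100 4096   # @60: null0..null7, 100 MiB each, 4096-byte block size
  done

The bare "null0", "null1", ... lines are each RPC's result: bdev_null_create echoes the name of the bdev it created.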
00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
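The interleaved @14-@18 entries here are eight background jobs running at once, which is why the add/remove commands appear out of namespace order; the main shell blocks at the @66 wait on the eight job PIDs (3423114 through 3423137 in this run). Continuing the sketch, reconstructed from the markers (the function wrapper and its invocation line are assumptions; add_remove, nsid, bdev, and pids are the names the trace shows):

  add_remove() {
      local nsid=$1 bdev=$2                                                           # @14
      for ((i = 0; i < 10; i++)); do                                                  # @16: ten add/remove rounds per worker
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17: hot-add NSID nsid backed by bdev
          "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18: hot-remove it again
      done
  }
  for ((i = 0; i < nthreads; i++)); do            # @62
      add_remove $((i + 1)) "null$i" &            # forked per worker: NSID i+1 paired with null bdev i
      pids+=($!)                                  # @64
  done
  wait "${pids[@]}"                               # @66: wait 3423114 3423116 ... 3423137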
00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3423114 3423116 3423118 3423122 3423125 3423127 3423130 3423137 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.565 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:09.827 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:09.827 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.827 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:09.827 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:09.827 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:09.827 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:09.827 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:09.827 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:10.088 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.088 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.088 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:10.088 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.088 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.088 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:10.088 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.088 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.088 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:10.088 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.088 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.088 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:10.088 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.088 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.088 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:10.088 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.088 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.088 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:10.088 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.088 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.088 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:10.088 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.088 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.088 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:10.088 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:10.088 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.349 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:10.349 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:10.349 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:10.349 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:10.349 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:10.349 19:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:10.349 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.349 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.349 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:10.349 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.349 19:44:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.349 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:10.349 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.349 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.349 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.349 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:10.349 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.349 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:10.349 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.349 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.350 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:10.350 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.350 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.350 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:10.350 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.350 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.350 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:10.611 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.611 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.611 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:10.611 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:10.611 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:10.611 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:10.611 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.611 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:10.611 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:10.611 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:10.611 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:10.871 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.871 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.871 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:10.871 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.871 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.871 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:10.871 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.871 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.871 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:10.871 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.871 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.871 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:10.871 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.871 19:44:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.871 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:10.871 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.871 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.871 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:10.871 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.871 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.871 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:10.871 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.871 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.872 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:10.872 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:10.872 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:10.872 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:10.872 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:10.872 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.872 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:11.133 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:11.133 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:11.133 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.133 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.133 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:11.133 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.133 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.133 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:11.133 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.133 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.133 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:11.133 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.133 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.133 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:11.133 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.133 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.133 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:11.133 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.133 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.133 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:11.133 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.133 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.133 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:11.133 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.133 19:44:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.133 19:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:11.394 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:11.394 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:11.394 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:11.394 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:11.394 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.395 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:11.395 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:11.395 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:11.395 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.395 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.395 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:11.395 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.395 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.395 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:11.395 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.395 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.395 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:11.395 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.395 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.395 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:11.656 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.656 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.656 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:11.656 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.656 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.656 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:11.656 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.656 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.656 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:11.656 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.656 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.656 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:11.656 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:11.656 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:11.656 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:11.656 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:11.656 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:11.656 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:11.656 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.656 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:11.918 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.918 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.918 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:11.918 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.918 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.918 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:11.918 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.918 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.918 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:11.918 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.918 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.918 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:11.918 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.918 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.918 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:11.918 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.918 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.918 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:11.918 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.918 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.918 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:11.918 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.918 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.918 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:12.179 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:12.179 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:12.179 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:12.179 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:12.179 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:12.179 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:12.179 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:12.180 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.180 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.180 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.180 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:12.180 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.180 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( i < 10 )) 00:06:12.180 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:12.180 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.180 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.180 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:12.180 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.180 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.180 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:12.180 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.180 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.180 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:12.180 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.180 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.180 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:12.180 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.180 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.180 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:12.180 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.180 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.180 19:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:12.440 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:12.440 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:06:12.440 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:12.440 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:12.440 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:12.440 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:12.440 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.440 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:12.701 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.701 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.701 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:12.701 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.701 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.701 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:12.701 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.701 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.701 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:12.701 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.701 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.701 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:12.701 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.701 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 
)) 00:06:12.701 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:12.701 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.701 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.702 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:12.702 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.702 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.702 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:12.702 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.702 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.702 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:12.702 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:12.702 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:12.702 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:12.702 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:12.964 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:12.964 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.964 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:12.964 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:12.964 19:44:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.964 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.964 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:12.964 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.964 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.964 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:12.964 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.964 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.964 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:12.964 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.964 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.964 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:12.964 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.964 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.964 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:12.964 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.964 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.964 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:12.964 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.964 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.964 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:12.964 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.964 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:06:12.964 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:13.226 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:13.226 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:13.226 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.226 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:13.226 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:13.226 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:13.226 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:13.226 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:13.226 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.226 19:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.491 
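
The churn traced above is the core of the test: ns_hotplug_stress.sh (the @16-@18 trace lines) runs one worker per namespace, and each worker attaches its null-bdev-backed namespace to cnode1 and detaches it again, ten times over. A minimal sketch consistent with that interleaving, assuming the subsystem and the null0..null7 bdevs were created earlier in the test (the exact helper layout in the script may differ):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {
    # one worker per namespace: attach, then detach, ten times
    local nsid=$1 bdev=$2 i
    for (( i = 0; i < 10; ++i )); do
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"
    done
}

for n in {1..8}; do
    add_remove "$n" "null$((n - 1))" &   # nsid n is backed by bdev null(n-1), as in the trace
done
wait
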
19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:13.491 rmmod nvme_tcp 00:06:13.491 rmmod nvme_fabrics 00:06:13.491 rmmod nvme_keyring 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3412598 ']' 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3412598 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3412598 ']' 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3412598 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3412598 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3412598' 00:06:13.491 killing process with pid 3412598 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3412598 00:06:13.491 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3412598 00:06:13.754 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:13.754 19:44:14 
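
Once the workers drain (the bare (( ++i )) / (( i < 10 )) pairs with no RPC body), the trap is cleared and nvmftestfini tears the target down; the steps traced here and continued below reduce to unloading the initiator modules, killing the target process, and undoing the network setup. A condensed sketch, with the PID, rule tag, and interface name taken from this run:

sync
set +e
modprobe -v -r nvme-tcp       # the log allows up to 20 attempts before giving up
modprobe -v -r nvme-fabrics
set -e

pid=3412598                   # the nvmf target (reactor_1) started earlier in the run
if kill -0 "$pid" 2>/dev/null && [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
fi

iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the SPDK rules
ip -4 addr flush cvl_0_1
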
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:13.754 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:13.754 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:13.754 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:13.754 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:13.754 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:13.754 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:13.754 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:13.754 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:13.754 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:13.754 19:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.670 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:15.670 00:06:15.670 real 0m49.053s 00:06:15.670 user 3m18.993s 00:06:15.670 sys 0m17.874s 00:06:15.670 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.670 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:15.670 ************************************ 00:06:15.670 END TEST nvmf_ns_hotplug_stress 00:06:15.670 ************************************ 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:15.931 ************************************ 00:06:15.931 START TEST nvmf_delete_subsystem 00:06:15.931 ************************************ 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:15.931 * Looking for test storage... 
00:06:15.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:15.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.931 --rc genhtml_branch_coverage=1 00:06:15.931 --rc genhtml_function_coverage=1 00:06:15.931 --rc genhtml_legend=1 00:06:15.931 --rc geninfo_all_blocks=1 00:06:15.931 --rc geninfo_unexecuted_blocks=1 00:06:15.931 00:06:15.931 ' 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:15.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.931 --rc genhtml_branch_coverage=1 00:06:15.931 --rc genhtml_function_coverage=1 00:06:15.931 --rc genhtml_legend=1 00:06:15.931 --rc geninfo_all_blocks=1 00:06:15.931 --rc geninfo_unexecuted_blocks=1 00:06:15.931 00:06:15.931 ' 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:15.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.931 --rc genhtml_branch_coverage=1 00:06:15.931 --rc genhtml_function_coverage=1 00:06:15.931 --rc genhtml_legend=1 00:06:15.931 --rc geninfo_all_blocks=1 00:06:15.931 --rc geninfo_unexecuted_blocks=1 00:06:15.931 00:06:15.931 ' 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:15.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.931 --rc genhtml_branch_coverage=1 00:06:15.931 --rc genhtml_function_coverage=1 00:06:15.931 --rc genhtml_legend=1 00:06:15.931 --rc geninfo_all_blocks=1 00:06:15.931 --rc geninfo_unexecuted_blocks=1 00:06:15.931 00:06:15.931 ' 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
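
The trace above is scripts/common.sh choosing lcov flags: it parses the installed lcov version with awk '{print $NF}' and runs lt 1.15 2, which splits both version strings on ., - and : and compares them field by field. A simplified re-implementation of the comparison (the real cmp_versions helper also handles other operators; lt is the only case this run exercises):

lt() {
    local -a v1 v2
    local i n
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    (( n = ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do       # missing fields compare as 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    done
    return 1                              # equal is not less-than
}

if lt "$(lcov --version | awk '{print $NF}')" 2; then
    # pre-2.0 lcov spells the knobs with the lcov_ prefix, as exported in the trace above
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi
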
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.931 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:16.194 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.194 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.194 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.194 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.194 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.194 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.194 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.194 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.194 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.194 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.194 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:16.194 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:16.194 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.194 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.194 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:16.194 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.194 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:16.194 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:16.194 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.194 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.194 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.195 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.195 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.195 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.195 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:16.195 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.195 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:16.195 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:16.195 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:16.195 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.195 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.195 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.195 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
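
The shell error printed just below — common.sh: line 33: [: : integer expression expected — is benign: build_nvmf_app_args runs an integer test of the form [ "$flag" -eq 1 ] on a variable that is empty in this configuration, so the test builtin complains on stderr, returns non-zero, and the branch is simply skipped. A two-line illustration (x stands in for the unset flag; the real variable name is not shown in the trace):

x=''
[ "$x" -eq 1 ] && echo on       # stderr: "[: : integer expression expected"; branch skipped
[ "${x:-0}" -eq 1 ] && echo on  # a guarded form that stays quiet
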
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:16.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:16.195 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:16.195 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:16.195 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:16.195 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:16.195 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:16.195 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:16.195 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:16.195 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:16.195 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:16.195 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:16.195 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:16.195 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:16.195 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:16.195 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:16.195 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:16.195 19:44:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:24.333 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:24.333 
19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:24.333 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:24.333 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.333 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:24.333 Found net devices under 0000:4b:00.1: cvl_0_1 
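Note on the discovery step traced above: common.sh selects candidate NICs by PCI vendor/device ID and then maps each PCI address to its kernel net device through sysfs. A minimal standalone sketch of that lookup in bash (the PCI address and the cvl_0_* names come from this run; the loop is illustrative, not the literal common.sh source):

# Map a PCI function to the net device(s) bound to it, the same mechanism
# the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion above uses.
pci=0000:4b:00.0                      # first E810 port found in this run
for path in "/sys/bus/pci/devices/$pci/net/"*; do
    [[ -e $path ]] || continue        # glob stays literal when no netdev is bound
    echo "Found net devices under $pci: ${path##*/}"
done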
00:06:24.334 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:24.334 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:24.334 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:24.334 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:24.334 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:24.334 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:24.334 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:24.334 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:24.334 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:24.334 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:24.334 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:24.334 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:24.334 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:24.334 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:24.334 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:24.334 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:24.334 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:24.334 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:24.334 19:44:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:24.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:24.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:06:24.334 00:06:24.334 --- 10.0.0.2 ping statistics --- 00:06:24.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.334 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:24.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:24.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:06:24.334 00:06:24.334 --- 10.0.0.1 ping statistics --- 00:06:24.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.334 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3430351 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3430351 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3430351 ']' 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.334 19:44:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.334 19:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:24.334 [2024-11-26 19:44:24.430393] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:06:24.334 [2024-11-26 19:44:24.430458] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:24.334 [2024-11-26 19:44:24.533417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.334 [2024-11-26 19:44:24.586846] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:24.334 [2024-11-26 19:44:24.586900] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:24.334 [2024-11-26 19:44:24.586919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:24.334 [2024-11-26 19:44:24.586932] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:24.334 [2024-11-26 19:44:24.586940] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:24.334 [2024-11-26 19:44:24.588639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.334 [2024-11-26 19:44:24.588645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.595 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.595 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:24.595 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:24.595 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:24.595 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:24.595 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:24.595 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:24.595 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.595 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:24.595 [2024-11-26 19:44:25.309734] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:24.596 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.596 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:24.596 19:44:25 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.596 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:24.596 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.596 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:24.596 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.596 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:24.596 [2024-11-26 19:44:25.334058] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:24.596 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.596 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:24.596 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.596 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:24.596 NULL1 00:06:24.596 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.596 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:24.596 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.596 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:24.596 Delay0 00:06:24.596 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.596 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.596 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.596 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:24.596 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.596 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3430846 00:06:24.596 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:24.596 19:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:24.856 [2024-11-26 19:44:25.461104] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
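Condensed, the target-side setup traced in the last few entries is six RPCs: create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1 (allow any host, max 10 namespaces), add a TCP listener on 10.0.0.2:4420, create a 1000 MiB null bdev with 512-byte blocks, wrap it in a delay bdev, and expose the delay bdev as a namespace. A hedged sketch issuing the same verbs through scripts/rpc.py (rpc_cmd in the trace wraps these RPCs; the rpc.py invocation and default socket are assumptions, the arguments are copied from the log):

# Same sequence as the trace, written as explicit rpc.py calls.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512       # 1000 MiB backing bdev, 512 B blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The delay bdev's latencies are given in microseconds, so each I/O takes on the order of a second (the second perf run below averages ~1,002,000 us, which matches). That keeps the 128-deep perf queue full for seconds at a time, guaranteeing the nvmf_delete_subsystem call that follows races against in-flight commands; the flood of "completed with error" and "starting I/O failed: -6" records below is the expected outcome of that race, not a test failure.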
00:06:26.770 19:44:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:26.770 19:44:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.770 19:44:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 starting I/O failed: -6 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 starting I/O failed: -6 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 starting I/O failed: -6 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 starting I/O failed: -6 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 starting I/O failed: -6 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 starting I/O failed: -6 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 starting I/O failed: -6 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 starting I/O failed: -6 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 starting I/O failed: -6 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 starting I/O failed: -6 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 starting I/O failed: -6 00:06:27.032 [2024-11-26 19:44:27.666254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e12c0 is same with the state(6) to be set 00:06:27.032 starting I/O failed: -6 00:06:27.032 starting I/O failed: -6 00:06:27.032 starting I/O failed: -6 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 
00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 starting I/O failed: -6 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 starting I/O failed: -6 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 starting I/O failed: -6 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed 
with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Write completed with error (sct=0, sc=8) 00:06:27.032 starting I/O failed: -6 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.032 Read completed with error (sct=0, sc=8) 00:06:27.033 Write completed with error (sct=0, sc=8) 00:06:27.033 Write completed with error (sct=0, sc=8) 00:06:27.033 starting I/O failed: -6 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Write completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 starting I/O failed: -6 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Write completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 starting I/O failed: -6 00:06:27.033 Write completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 starting I/O failed: -6 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Write completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 starting I/O failed: -6 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Write completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 [2024-11-26 19:44:27.672464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f973400d490 is same with the state(6) to be set 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Write completed with error (sct=0, sc=8) 00:06:27.033 Write completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Write completed with error (sct=0, sc=8) 00:06:27.033 Write completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Write completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Write completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Write completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Write completed with error (sct=0, sc=8) 00:06:27.033 Write completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Write completed with error (sct=0, sc=8) 00:06:27.033 Write completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Write completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Write completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error 
(sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Write completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Write completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Write completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.033 Read completed with error (sct=0, sc=8) 00:06:27.974 [2024-11-26 19:44:28.639902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e29b0 is same with the state(6) to be set 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Write completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Write completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Write completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Write completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Write completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 [2024-11-26 19:44:28.669990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e14a0 is same with the state(6) to be set 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Write completed with error (sct=0, sc=8) 00:06:27.974 Write completed with error (sct=0, sc=8) 00:06:27.974 Write completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Write completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Write completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Write completed with error (sct=0, sc=8) 00:06:27.974 Write completed with error (sct=0, sc=8) 00:06:27.974 Write completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 
00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 [2024-11-26 19:44:28.670461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e1860 is same with the state(6) to be set 00:06:27.974 Write completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Write completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Write completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Write completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 [2024-11-26 19:44:28.673806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f973400d020 is same with the state(6) to be set 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Write completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Write completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Write completed with error (sct=0, sc=8) 00:06:27.974 Write completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Write completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 Read completed with error (sct=0, sc=8) 00:06:27.974 [2024-11-26 19:44:28.674714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f973400d7c0 is same with the state(6) to be set 00:06:27.974 Initializing NVMe Controllers 00:06:27.974 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:27.974 Controller IO queue size 128, less than required. 00:06:27.974 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:27.974 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:27.974 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:27.974 Initialization complete. Launching workers. 
00:06:27.974 ========================================================
00:06:27.974 Latency(us)
00:06:27.974 Device Information : IOPS MiB/s Average min max
00:06:27.974 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 167.81 0.08 902576.50 373.42 1043777.78
00:06:27.974 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 153.87 0.08 934323.47 320.24 1011537.63
00:06:27.974 ========================================================
00:06:27.974 Total : 321.69 0.16 917761.97 320.24 1043777.78
00:06:27.974
00:06:27.974 [2024-11-26 19:44:28.675460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e29b0 (9): Bad file descriptor
00:06:27.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:27.974 19:44:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:27.974 19:44:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:06:27.974 19:44:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3430846
00:06:27.974 19:44:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3430846
00:06:28.546 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3430846) - No such process
00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3430846
00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3430846
00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3430846
00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:28.546 19:44:29
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.546 [2024-11-26 19:44:29.207872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3432180 00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3432180 00:06:28.546 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:28.546 [2024-11-26 19:44:29.313457] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
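The delay-counter and kill -0 entries that dominate the next stretch are a plain liveness poll on the second perf process: kill -0 delivers no signal, it only checks that the PID still exists, and it fails with "No such process" once perf exits. A minimal equivalent of the loop in delete_subsystem.sh (PID taken from this run; the error handling is illustrative):

# Poll until the perf process exits, giving up after ~10 s (20 x 0.5 s).
perf_pid=3432180
delay=0
while kill -0 "$perf_pid" 2> /dev/null; do
    (( delay++ > 20 )) && { echo 'perf did not exit in time' >&2; exit 1; }
    sleep 0.5
done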
00:06:29.117 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:29.117 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3432180 00:06:29.117 19:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:29.690 19:44:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:29.690 19:44:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3432180 00:06:29.690 19:44:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:29.952 19:44:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:29.952 19:44:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3432180 00:06:29.952 19:44:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:30.528 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:30.528 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3432180 00:06:30.528 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:31.098 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:31.098 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3432180 00:06:31.098 19:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:31.668 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:31.668 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3432180 00:06:31.668 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:31.668 Initializing NVMe Controllers 00:06:31.668 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:31.668 Controller IO queue size 128, less than required. 00:06:31.668 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:31.668 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:31.668 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:31.668 Initialization complete. Launching workers. 
00:06:31.668 ========================================================
00:06:31.668 Latency(us)
00:06:31.668 Device Information : IOPS MiB/s Average min max
00:06:31.668 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002094.80 1000248.62 1006265.21
00:06:31.668 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002918.53 1000202.12 1008532.22
00:06:31.668 ========================================================
00:06:31.668 Total : 256.00 0.12 1002506.66 1000202.12 1008532.22
00:06:31.668
00:06:32.240 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:32.240 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3432180
00:06:32.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3432180) - No such process
00:06:32.240 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3432180
00:06:32.240 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:32.240 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:06:32.240 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:32.240 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:06:32.240 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:32.240 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:06:32.240 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:32.240 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:32.240 rmmod nvme_tcp
00:06:32.240 rmmod nvme_fabrics
00:06:32.240 rmmod nvme_keyring
00:06:32.240 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:32.240 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:06:32.240 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:06:32.240 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3430351 ']'
00:06:32.240 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3430351
00:06:32.240 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3430351 ']'
00:06:32.240 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3430351
00:06:32.240 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:06:32.240 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:32.240 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3430351
00:06:32.240 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:32.240 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '['
reactor_0 = sudo ']' 00:06:32.240 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3430351' 00:06:32.240 killing process with pid 3430351 00:06:32.240 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3430351 00:06:32.240 19:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3430351 00:06:32.240 19:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:32.240 19:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:32.240 19:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:32.240 19:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:32.240 19:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:32.240 19:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:32.240 19:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:32.240 19:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:32.240 19:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:32.240 19:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:32.240 19:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:32.240 19:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:34.794 00:06:34.794 real 0m18.545s 00:06:34.794 user 0m31.056s 00:06:34.794 sys 0m6.879s 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.794 ************************************ 00:06:34.794 END TEST nvmf_delete_subsystem 00:06:34.794 ************************************ 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:34.794 ************************************ 00:06:34.794 START TEST nvmf_host_management 00:06:34.794 ************************************ 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:34.794 * Looking for test storage... 
00:06:34.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.794 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:34.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.795 --rc genhtml_branch_coverage=1 00:06:34.795 --rc genhtml_function_coverage=1 00:06:34.795 --rc genhtml_legend=1 00:06:34.795 --rc geninfo_all_blocks=1 00:06:34.795 --rc geninfo_unexecuted_blocks=1 00:06:34.795 00:06:34.795 ' 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:34.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.795 --rc genhtml_branch_coverage=1 00:06:34.795 --rc genhtml_function_coverage=1 00:06:34.795 --rc genhtml_legend=1 00:06:34.795 --rc geninfo_all_blocks=1 00:06:34.795 --rc geninfo_unexecuted_blocks=1 00:06:34.795 00:06:34.795 ' 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:34.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.795 --rc genhtml_branch_coverage=1 00:06:34.795 --rc genhtml_function_coverage=1 00:06:34.795 --rc genhtml_legend=1 00:06:34.795 --rc geninfo_all_blocks=1 00:06:34.795 --rc geninfo_unexecuted_blocks=1 00:06:34.795 00:06:34.795 ' 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:34.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.795 --rc genhtml_branch_coverage=1 00:06:34.795 --rc genhtml_function_coverage=1 00:06:34.795 --rc genhtml_legend=1 00:06:34.795 --rc geninfo_all_blocks=1 00:06:34.795 --rc geninfo_unexecuted_blocks=1 00:06:34.795 00:06:34.795 ' 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:34.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:34.795 19:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.936 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:42.937 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:42.937 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:42.937 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.937 19:44:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:42.937 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:42.937 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:42.938 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:42.938 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:42.938 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:42.938 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:42.938 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:42.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:42.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:06:42.938 00:06:42.938 --- 10.0.0.2 ping statistics --- 00:06:42.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.938 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:06:42.938 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:42.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:42.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:06:42.938 00:06:42.938 --- 10.0.0.1 ping statistics --- 00:06:42.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.938 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:06:42.938 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:42.938 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:42.938 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:42.938 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:42.938 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:42.938 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:42.938 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:42.938 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:42.938 19:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:42.938 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:42.938 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:42.938 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:42.938 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:42.938 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:42.938 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.938 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3438643 00:06:42.938 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3438643 00:06:42.938 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:42.938 19:44:43 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3438643 ']' 00:06:42.938 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.938 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.938 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.938 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.938 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.938 [2024-11-26 19:44:43.082168] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:06:42.938 [2024-11-26 19:44:43.082227] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.938 [2024-11-26 19:44:43.184304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.938 [2024-11-26 19:44:43.238404] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:42.938 [2024-11-26 19:44:43.238453] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:42.938 [2024-11-26 19:44:43.238462] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:42.938 [2024-11-26 19:44:43.238469] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:42.938 [2024-11-26 19:44:43.238475] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
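The four reactor threads that start next follow directly from the -m 0x1E core mask passed to nvmf_tgt above: bits 1 through 4 are set, so reactors come up on cores 1-4 while core 0 is left free for the bdevperf initiator launched later in the test. A minimal standalone sketch of decoding such a mask (hypothetical helper, not part of the SPDK scripts):

  # decode_coremask.sh - decode an SPDK-style -m core mask
  mask=0x1E
  for bit in {0..31}; do
    if (( (mask >> bit) & 1 )); then
      echo "reactor expected on core $bit"
    fi
  done
  # prints cores 1, 2, 3 and 4 for 0x1E (binary 11110)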
00:06:42.938 [2024-11-26 19:44:43.240800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.938 [2024-11-26 19:44:43.240956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.938 [2024-11-26 19:44:43.241113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.938 [2024-11-26 19:44:43.241113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:43.200 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.200 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:43.200 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:43.200 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:43.200 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:43.200 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:43.200 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:43.200 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.200 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:43.200 [2024-11-26 19:44:43.968342] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:43.200 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.200 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:43.200 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:43.200 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:43.200 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:43.200 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:43.200 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:43.200 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.200 19:44:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:43.461 Malloc0 00:06:43.461 [2024-11-26 19:44:44.048627] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:43.461 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.461 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:43.461 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:43.461 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:43.461 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=3439096 00:06:43.461 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3439096 /var/tmp/bdevperf.sock 00:06:43.461 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3439096 ']' 00:06:43.461 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:43.461 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.461 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:43.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:43.461 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.461 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:43.461 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:43.461 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:43.461 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:43.461 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:43.461 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:43.461 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:43.461 { 00:06:43.461 "params": { 00:06:43.461 "name": "Nvme$subsystem", 00:06:43.461 "trtype": "$TEST_TRANSPORT", 00:06:43.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:43.461 "adrfam": "ipv4", 00:06:43.461 "trsvcid": "$NVMF_PORT", 00:06:43.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:43.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:43.461 "hdgst": ${hdgst:-false}, 00:06:43.461 "ddgst": ${ddgst:-false} 00:06:43.461 }, 00:06:43.461 "method": "bdev_nvme_attach_controller" 00:06:43.461 } 00:06:43.461 EOF 00:06:43.461 )") 00:06:43.461 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:43.461 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:43.461 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:43.461 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:43.461 "params": { 00:06:43.461 "name": "Nvme0", 00:06:43.461 "trtype": "tcp", 00:06:43.461 "traddr": "10.0.0.2", 00:06:43.461 "adrfam": "ipv4", 00:06:43.461 "trsvcid": "4420", 00:06:43.461 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:43.461 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:43.461 "hdgst": false, 00:06:43.461 "ddgst": false 00:06:43.461 }, 00:06:43.461 "method": "bdev_nvme_attach_controller" 00:06:43.461 }' 00:06:43.461 [2024-11-26 19:44:44.160292] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
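The --json /dev/fd/63 argument on the bdevperf command line is bash process substitution at work: gen_nvmf_target_json expands the heredoc template shown above once per subsystem, and bdevperf reads the flattened result as an anonymous file descriptor rather than a file on disk. A reduced sketch of the same pattern (illustrative only; gen_json and my_app are placeholders, not SPDK names):

  # Feed generated JSON to a tool that expects a config file path.
  gen_json() {
    cat <<EOF
  { "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2" } }
  EOF
  }
  my_app --json <(gen_json)   # my_app sees the config as /dev/fd/63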
00:06:43.461 [2024-11-26 19:44:44.160358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3439096 ] 00:06:43.461 [2024-11-26 19:44:44.256966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.723 [2024-11-26 19:44:44.310574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.984 Running I/O for 10 seconds... 00:06:44.245 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.245 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:44.245 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:44.245 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.245 19:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:44.245 19:44:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.245 19:44:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:44.245 19:44:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:44.245 19:44:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:44.245 19:44:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:44.245 19:44:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:44.245 19:44:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:44.245 19:44:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:44.245 19:44:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:44.245 19:44:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:44.245 19:44:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:44.245 19:44:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.245 19:44:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:44.245 19:44:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.508 19:44:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:06:44.508 19:44:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:06:44.508 19:44:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:44.508 19:44:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:44.508 19:44:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:44.508 19:44:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:44.508 19:44:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.508 19:44:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:44.508 [2024-11-26 19:44:45.078667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188f150 is same with the state(6) to be set 00:06:44.508 [message repeated ~60 more times for tqpair=0x188f150, 19:44:45.078793 through 19:44:45.079256]
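The burst of identical tcp.c:1773 errors above, and the long run of READ/completion pairs that follows, are both tear-down noise from the nvmf_subsystem_remove_host call a few lines earlier: the target keeps trying to put the doomed queue pair into the recv state it is already in, and every I/O still in flight completes with ABORTED - SQ DELETION (00/08), that is generic status code set 0x00, status code 0x08, once its submission queue is deleted. Two hypothetical one-liners for triaging such a storm, assuming the log was saved as build.log (neither is part of the test suite):

  grep -o 'tqpair=0x[0-9a-f]*' build.log | sort | uniq -c   # confirm all repeats name one qpair (0x188f150 here)
  grep -c 'ABORTED - SQ DELETION' build.log                 # tally the aborted I/Os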
00:06:44.509 [2024-11-26 19:44:45.079348] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.509 [2024-11-26 19:44:45.079403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:44.509 [2024-11-26 19:44:45.079428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.509 [2024-11-26 19:44:45.079437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:44.509 [2024-11-26 19:44:45.079448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.509 [2024-11-26 19:44:45.079456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:44.509 [2024-11-26 19:44:45.079477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.509 [2024-11-26 19:44:45.079485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:44.509 [2024-11-26 19:44:45.079495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.509 [2024-11-26 19:44:45.079504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:44.509 [2024-11-26 19:44:45.079514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.509 [2024-11-26 19:44:45.079521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:44.509 [2024-11-26 19:44:45.079533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.509 [2024-11-26 19:44:45.079542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:44.509 [2024-11-26 19:44:45.079554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.509 [2024-11-26 19:44:45.079564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:44.509 [2024-11-26 19:44:45.079576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.509 [2024-11-26 19:44:45.079585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:44.509 [2024-11-26 19:44:45.079596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.509 [2024-11-26 19:44:45.079607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:44.509 [2024-11-26 19:44:45.079619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 
nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.509 [2024-11-26 19:44:45.079626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:44.509 [2024-11-26 19:44:45.079636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.509 [2024-11-26 19:44:45.079643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:44.509 [2024-11-26 19:44:45.079653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.509 [2024-11-26 19:44:45.079661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:44.509 [2024-11-26 19:44:45.079672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.509 [2024-11-26 19:44:45.079680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:44.509 [2024-11-26 19:44:45.079690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.509 [2024-11-26 19:44:45.079699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:44.509 [2024-11-26 19:44:45.079709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.509 [2024-11-26 19:44:45.079724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:44.509 [2024-11-26 19:44:45.079736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.509 [2024-11-26 19:44:45.079745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:44.509 [2024-11-26 19:44:45.079755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.509 [2024-11-26 19:44:45.079765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:44.509 [2024-11-26 19:44:45.079778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.509 [2024-11-26 19:44:45.079785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:44.509 [2024-11-26 19:44:45.079795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.509 [2024-11-26 19:44:45.079803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:44.509 [2024-11-26 19:44:45.079812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:44.509 [2024-11-26 19:44:45.079821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:44.509 [2024-11-26 19:44:45.079832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:44.509 [2024-11-26 19:44:45.079840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:44.509 [... the same READ command / ABORTED - SQ DELETION completion pair repeats for cid:22 through cid:62 (lba 68352 through 73472, len:128 each) ...]
00:06:44.510 [2024-11-26 19:44:45.080705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:44.510 [2024-11-26 19:44:45.080716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:44.510 [2024-11-26 19:44:45.080728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483ee0 is same with the state(6) to be set
00:06:44.510 19:44:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:44.510 [2024-11-26 19:44:45.082051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:06:44.510 19:44:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:06:44.510 19:44:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:44.510 19:44:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:44.510 task offset: 65536 on job bdev=Nvme0n1 fails
00:06:44.510
00:06:44.510 Latency(us)
00:06:44.510 [2024-11-26T18:44:45.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:44.510 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:44.511 Job: Nvme0n1 ended in about 0.40 seconds with error
00:06:44.511 Verification LBA range: start 0x0 length 0x400
00:06:44.511 Nvme0n1 : 0.40 1295.86 80.99 161.98 0.00 42529.45 4614.83 36918.61
00:06:44.511 [2024-11-26T18:44:45.332Z] ===================================================================================================================
00:06:44.511 [2024-11-26T18:44:45.332Z] Total : 1295.86 80.99 161.98 0.00 42529.45 4614.83 36918.61
00:06:44.511 [2024-11-26 19:44:45.084338] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:44.511 [2024-11-26 19:44:45.084381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226b010 (9): Bad file descriptor
00:06:44.511 19:44:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:44.511 19:44:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:06:44.511 [2024-11-26 19:44:45.186375] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
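The failure summary above is internally consistent: bdevperf reports throughput as IOPS times the 65536-byte IO size shown in the job line. A quick check of the MiB/s column (plain bc arithmetic, not part of the test run):

    # 1295.86 IOPS x 65536 B per IO, expressed in MiB/s (1 MiB = 1048576 B)
    echo 'scale=2; 1295.86 * 65536 / 1048576' | bc
    # -> 80.99, matching the MiB/s column for Nvme0n1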
00:06:45.455 19:44:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3439096
00:06:45.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3439096) - No such process
00:06:45.455 19:44:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:06:45.455 19:44:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:06:45.455 19:44:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:06:45.455 19:44:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:06:45.455 19:44:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:06:45.455 19:44:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:06:45.455 19:44:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:06:45.455 19:44:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:06:45.455 {
00:06:45.455 "params": {
00:06:45.455 "name": "Nvme$subsystem",
00:06:45.455 "trtype": "$TEST_TRANSPORT",
00:06:45.455 "traddr": "$NVMF_FIRST_TARGET_IP",
00:06:45.455 "adrfam": "ipv4",
00:06:45.455 "trsvcid": "$NVMF_PORT",
00:06:45.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:06:45.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:06:45.455 "hdgst": ${hdgst:-false},
00:06:45.455 "ddgst": ${ddgst:-false}
00:06:45.455 },
00:06:45.455 "method": "bdev_nvme_attach_controller"
00:06:45.455 }
00:06:45.455 EOF
00:06:45.455 )")
00:06:45.455 19:44:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:06:45.455 19:44:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:06:45.455 19:44:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:06:45.455 19:44:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:06:45.455 "params": {
00:06:45.455 "name": "Nvme0",
00:06:45.455 "trtype": "tcp",
00:06:45.455 "traddr": "10.0.0.2",
00:06:45.455 "adrfam": "ipv4",
00:06:45.455 "trsvcid": "4420",
00:06:45.455 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:06:45.455 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:06:45.455 "hdgst": false,
00:06:45.455 "ddgst": false
00:06:45.455 },
00:06:45.455 "method": "bdev_nvme_attach_controller"
00:06:45.455 }'
00:06:45.455 [2024-11-26 19:44:46.155315] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization...
00:06:45.455 [2024-11-26 19:44:46.155369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3439798 ]
00:06:45.455 [2024-11-26 19:44:46.245462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:45.717 [2024-11-26 19:44:46.280816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:45.717 Running I/O for 1 seconds...
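The run above pipes the generated config into bdevperf as file descriptor 62 (--json /dev/fd/62). A minimal standalone equivalent, assuming the printf'd JSON above is captured into a file first (the /tmp path and file name are illustrative, not part of the harness):

    # Save the bdev_nvme_attach_controller config printed above as /tmp/nvme0.json, then:
    # flags mirror the traced invocation: 64-deep queue, 64 KiB IOs, verify workload, 1 second.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1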
00:06:47.104 1876.00 IOPS, 117.25 MiB/s
00:06:47.104
00:06:47.104 Latency(us)
00:06:47.104 [2024-11-26T18:44:47.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:47.104 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:47.104 Verification LBA range: start 0x0 length 0x400
00:06:47.104 Nvme0n1 : 1.05 1848.61 115.54 0.00 0.00 32595.17 2307.41 42816.85
00:06:47.104 [2024-11-26T18:44:47.925Z] ===================================================================================================================
00:06:47.104 [2024-11-26T18:44:47.925Z] Total : 1848.61 115.54 0.00 0.00 32595.17 2307.41 42816.85
00:06:47.104 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:06:47.104 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:06:47.104 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:06:47.104 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:06:47.104 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:06:47.104 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:47.104 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:06:47.104 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:47.104 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:06:47.104 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:47.104 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:47.104 rmmod nvme_tcp
00:06:47.104 rmmod nvme_fabrics
00:06:47.104 rmmod nvme_keyring
00:06:47.104 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:47.104 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:06:47.104 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:06:47.104 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3438643 ']'
00:06:47.104 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3438643
00:06:47.104 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3438643 ']'
00:06:47.104 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3438643
00:06:47.104 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:06:47.104 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:47.104 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3438643
00:06:47.104 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:06:47.104 19:44:47
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:47.104 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3438643' 00:06:47.104 killing process with pid 3438643 00:06:47.104 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3438643 00:06:47.104 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3438643 00:06:47.104 [2024-11-26 19:44:47.919304] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:47.366 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:47.366 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:47.366 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:47.366 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:47.366 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:47.366 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:47.366 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:47.366 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:47.366 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:47.366 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:47.366 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:47.366 19:44:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.281 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:49.281 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:49.281 00:06:49.281 real 0m14.858s 00:06:49.281 user 0m23.541s 00:06:49.281 sys 0m6.912s 00:06:49.281 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.281 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:49.281 ************************************ 00:06:49.281 END TEST nvmf_host_management 00:06:49.281 ************************************ 00:06:49.281 19:44:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:49.281 19:44:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:49.281 19:44:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.281 19:44:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:49.542 ************************************ 00:06:49.542 START TEST nvmf_lvol 00:06:49.542 ************************************ 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:49.542 * Looking for test storage... 00:06:49.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:49.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.542 --rc genhtml_branch_coverage=1 00:06:49.542 --rc genhtml_function_coverage=1 00:06:49.542 --rc genhtml_legend=1 00:06:49.542 --rc geninfo_all_blocks=1 00:06:49.542 --rc geninfo_unexecuted_blocks=1 00:06:49.542 00:06:49.542 ' 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:49.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.542 --rc genhtml_branch_coverage=1 00:06:49.542 --rc genhtml_function_coverage=1 00:06:49.542 --rc genhtml_legend=1 00:06:49.542 --rc geninfo_all_blocks=1 00:06:49.542 --rc geninfo_unexecuted_blocks=1 00:06:49.542 00:06:49.542 ' 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:49.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.542 --rc genhtml_branch_coverage=1 00:06:49.542 --rc genhtml_function_coverage=1 00:06:49.542 --rc genhtml_legend=1 00:06:49.542 --rc geninfo_all_blocks=1 00:06:49.542 --rc geninfo_unexecuted_blocks=1 00:06:49.542 00:06:49.542 ' 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:49.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.542 --rc genhtml_branch_coverage=1 00:06:49.542 --rc genhtml_function_coverage=1 00:06:49.542 --rc genhtml_legend=1 00:06:49.542 --rc geninfo_all_blocks=1 00:06:49.542 --rc geninfo_unexecuted_blocks=1 00:06:49.542 00:06:49.542 ' 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
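The lcov version gate traced above is a component-wise compare: both version strings are split on '.', '-' and ':', then compared numerically field by field. A minimal standalone sketch of the same idea (simplified from the trace, assuming purely numeric components; not the verbatim scripts/common.sh function):

    lt() {   # succeeds when version $1 sorts strictly before version $2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first differing field decides
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not less-than
    }
    lt 1.15 2 && echo 'lcov predates 2.x'   # matches the trace: 1 < 2 on the first field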
00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.542 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.543 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:49.543 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.543 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:49.543 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:49.543 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:49.543 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:49.543 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:49.543 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:49.543 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:49.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:49.543 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:49.543 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:49.543 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:49.543 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:49.543 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:49.543 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:49.543 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:49.543 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:49.543 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:49.543 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:49.543 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:49.543 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:49.543 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:49.543 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:49.543 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.543 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:49.543 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.803 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:49.804 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:49.804 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:49.804 19:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:58.086 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:58.086 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:58.086 19:44:57 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:58.086 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:58.086 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 ))
00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:06:58.086 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:06:58.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:58.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms
00:06:58.087
00:06:58.087 --- 10.0.0.2 ping statistics ---
00:06:58.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:58.087 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:58.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:58.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms
00:06:58.087
00:06:58.087 --- 10.0.0.1 ping statistics ---
00:06:58.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:58.087 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3446060
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3446060
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3446060 ']'
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:58.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:58.087 19:44:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:06:58.087 [2024-11-26 19:44:58.026003] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization...
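The interface setup traced above yields the two-port split these tests rely on: the target-side E810 port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace where nvmf_tgt runs, while the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace. Condensed from the commands in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                         # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side, inside namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                                # reachability check, as above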
00:06:58.087 [2024-11-26 19:44:58.026072] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:58.087 [2024-11-26 19:44:58.127897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:58.087 [2024-11-26 19:44:58.183225] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:58.087 [2024-11-26 19:44:58.183275] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:58.087 [2024-11-26 19:44:58.183285] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:58.087 [2024-11-26 19:44:58.183292] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:58.087 [2024-11-26 19:44:58.183298] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:58.087 [2024-11-26 19:44:58.185137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.087 [2024-11-26 19:44:58.185300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.087 [2024-11-26 19:44:58.185302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.087 19:44:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.087 19:44:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:58.087 19:44:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:58.087 19:44:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:58.087 19:44:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:58.087 19:44:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:58.087 19:44:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:58.350 [2024-11-26 19:44:59.050805] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.350 19:44:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:58.612 19:44:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:58.612 19:44:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:58.874 19:44:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:58.874 19:44:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:59.136 19:44:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:59.136 19:44:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3ac6c2e6-f169-438d-ab78-9c873ecee602 00:06:59.136 19:44:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3ac6c2e6-f169-438d-ab78-9c873ecee602 lvol 20 00:06:59.399 19:45:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=582ed2b8-7db3-4c25-acdb-1bdaf31aa4b7 00:06:59.399 19:45:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:59.660 19:45:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 582ed2b8-7db3-4c25-acdb-1bdaf31aa4b7 00:06:59.660 19:45:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:59.922 [2024-11-26 19:45:00.633846] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:59.922 19:45:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:00.183 19:45:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3446920 00:07:00.183 19:45:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:00.183 19:45:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:01.143 19:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 582ed2b8-7db3-4c25-acdb-1bdaf31aa4b7 MY_SNAPSHOT 00:07:01.404 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f741e053-cb2f-46e6-8011-6c8132df4c03 00:07:01.404 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 582ed2b8-7db3-4c25-acdb-1bdaf31aa4b7 30 00:07:01.666 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f741e053-cb2f-46e6-8011-6c8132df4c03 MY_CLONE 00:07:01.928 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=25f66e18-dd96-4d9a-b939-9ee7e6fb6a52 00:07:01.928 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 25f66e18-dd96-4d9a-b939-9ee7e6fb6a52 00:07:02.190 19:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3446920 00:07:10.334 Initializing NVMe Controllers 00:07:10.334 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:10.334 Controller IO queue size 128, less than required. 00:07:10.334 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
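Up to this point the nvmf_lvol test has built its whole stack through rpc.py calls against the target's default /var/tmp/spdk.sock: two malloc bdevs striped into a raid0, an lvstore on top, a 20M lvol exported over NVMe/TCP, then snapshot, resize, clone, and inflate while spdk_nvme_perf applies the randwrite load whose per-core results follow. A condensed sketch of that sequence, a reading of the trace above rather than the script itself, with <...> standing in for the UUIDs each call printed (3ac6c2e6-..., 582ed2b8-..., f741e053-..., 25f66e18-...):

  rpc.py bdev_malloc_create 64 512                                   # -> Malloc0
  rpc.py bdev_malloc_create 64 512                                   # -> Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  rpc.py bdev_lvol_create_lvstore raid0 lvs                          # -> <lvs-uuid>
  rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20                      # -> <lvol-uuid>
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &             # 10 s load in the background
  rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT                  # -> <snap-uuid>
  rpc.py bdev_lvol_resize <lvol-uuid> 30
  rpc.py bdev_lvol_clone <snap-uuid> MY_CLONE                        # -> <clone-uuid>
  rpc.py bdev_lvol_inflate <clone-uuid>                              # decouple clone from snapshot

The teardown RPCs (nvmf_delete_subsystem, bdev_lvol_delete, bdev_lvol_delete_lvstore) run after the perf results below.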
00:07:10.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:07:10.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:07:10.334 Initialization complete. Launching workers.
00:07:10.334 ========================================================
00:07:10.334 Latency(us)
00:07:10.334 Device Information : IOPS MiB/s Average min max
00:07:10.334 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16807.80 65.66 7618.31 823.43 63079.01
00:07:10.334 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15864.10 61.97 8070.42 4044.40 47175.15
00:07:10.334 ========================================================
00:07:10.334 Total : 32671.90 127.62 7837.83 823.43 63079.01
00:07:10.334 
00:07:10.334 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:10.594 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 582ed2b8-7db3-4c25-acdb-1bdaf31aa4b7
00:07:10.854 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3ac6c2e6-f169-438d-ab78-9c873ecee602
00:07:10.854 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:07:10.854 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:07:10.854 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:07:10.854 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:10.854 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:07:10.854 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:10.854 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:07:10.854 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:10.854 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:10.854 rmmod nvme_tcp
00:07:11.113 rmmod nvme_fabrics
00:07:11.113 rmmod nvme_keyring
00:07:11.113 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:11.113 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:07:11.113 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:07:11.113 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3446060 ']'
00:07:11.113 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3446060
00:07:11.113 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3446060 ']'
00:07:11.113 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3446060
00:07:11.113 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:07:11.113 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:11.113 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3446060
00:07:11.113 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:11.113 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:11.113 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3446060'
00:07:11.113 killing process with pid 3446060
00:07:11.113 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3446060
00:07:11.113 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3446060
00:07:11.114 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:07:11.114 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:07:11.114 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:07:11.114 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr
00:07:11.114 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save
00:07:11.114 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:07:11.114 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore
00:07:11.114 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:11.114 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns
00:07:11.114 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:11.114 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:11.114 19:45:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:13.659 19:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:13.660 
00:07:13.660 real 0m23.890s
00:07:13.660 user 1m4.215s
00:07:13.660 sys 0m8.681s
00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:07:13.660 ************************************
00:07:13.660 END TEST nvmf_lvol
00:07:13.660 ************************************
00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:13.660 ************************************
00:07:13.660 START TEST nvmf_lvs_grow
00:07:13.660 ************************************
00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:07:13.660 * Looking for test storage...
00:07:13.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:13.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.660 --rc genhtml_branch_coverage=1 00:07:13.660 --rc genhtml_function_coverage=1 00:07:13.660 --rc genhtml_legend=1 00:07:13.660 --rc geninfo_all_blocks=1 00:07:13.660 --rc geninfo_unexecuted_blocks=1 00:07:13.660 00:07:13.660 ' 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:13.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.660 --rc genhtml_branch_coverage=1 00:07:13.660 --rc genhtml_function_coverage=1 00:07:13.660 --rc genhtml_legend=1 00:07:13.660 --rc geninfo_all_blocks=1 00:07:13.660 --rc geninfo_unexecuted_blocks=1 00:07:13.660 00:07:13.660 ' 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:13.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.660 --rc genhtml_branch_coverage=1 00:07:13.660 --rc genhtml_function_coverage=1 00:07:13.660 --rc genhtml_legend=1 00:07:13.660 --rc geninfo_all_blocks=1 00:07:13.660 --rc geninfo_unexecuted_blocks=1 00:07:13.660 00:07:13.660 ' 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:13.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.660 --rc genhtml_branch_coverage=1 00:07:13.660 --rc genhtml_function_coverage=1 00:07:13.660 --rc genhtml_legend=1 00:07:13.660 --rc geninfo_all_blocks=1 00:07:13.660 --rc geninfo_unexecuted_blocks=1 00:07:13.660 00:07:13.660 ' 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:13.660 19:45:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.660 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:13.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:13.661 19:45:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:21.805 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:21.805 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:21.805 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:21.805 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:21.805 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:21.805 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:21.805 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:21.805 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:21.805 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:21.805 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:21.805 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:21.805 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:21.805 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:21.805 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:21.805 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:21.805 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:21.806 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:21.806 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:21.806 19:45:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:21.806 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:21.806 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:21.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:21.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:07:21.806 00:07:21.806 --- 10.0.0.2 ping statistics --- 00:07:21.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.806 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:21.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:21.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:07:21.806 00:07:21.806 --- 10.0.0.1 ping statistics --- 00:07:21.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.806 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3456336 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3456336 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3456336 ']' 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.806 19:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:21.806 [2024-11-26 19:45:21.931964] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
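nvmfappstart above forks nvmf_tgt inside the target namespace and then blocks in waitforlisten until the app answers on its RPC socket, so no configuration RPC races the startup. A rough sketch of that handshake, assuming rpc_get_methods as the liveness probe (the harness's actual retry loop lives in autotest_common.sh with max_retries=100):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # poll the UNIX-domain RPC socket until the target responds
  until ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # first real RPC, as issued below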
00:07:21.806 [2024-11-26 19:45:21.932031] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.806 [2024-11-26 19:45:22.031823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.806 [2024-11-26 19:45:22.083142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:21.806 [2024-11-26 19:45:22.083204] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:21.806 [2024-11-26 19:45:22.083213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.806 [2024-11-26 19:45:22.083220] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:21.806 [2024-11-26 19:45:22.083226] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:21.806 [2024-11-26 19:45:22.084021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.119 19:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.119 19:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:22.119 19:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:22.119 19:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:22.119 19:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:22.119 19:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:22.119 19:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:22.379 [2024-11-26 19:45:22.956077] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.379 19:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:22.379 19:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.379 19:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.379 19:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:22.379 ************************************ 00:07:22.379 START TEST lvs_grow_clean 00:07:22.379 ************************************ 00:07:22.379 19:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:22.379 19:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:22.379 19:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:22.379 19:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:22.379 19:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:22.379 19:45:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:22.379 19:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:22.379 19:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:22.379 19:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:22.379 19:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:22.640 19:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:22.640 19:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:22.640 19:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8ecc8a53-d4e6-46b9-b1ad-1c67064d3be7 00:07:22.640 19:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ecc8a53-d4e6-46b9-b1ad-1c67064d3be7 00:07:22.640 19:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:22.901 19:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:22.901 19:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:22.901 19:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8ecc8a53-d4e6-46b9-b1ad-1c67064d3be7 lvol 150 00:07:23.161 19:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=74b45b66-d531-4cb2-8f7b-c1441fcf90ec 00:07:23.161 19:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:23.161 19:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:23.161 [2024-11-26 19:45:23.976126] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:23.161 [2024-11-26 19:45:23.976209] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:23.422 true 00:07:23.422 19:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
8ecc8a53-d4e6-46b9-b1ad-1c67064d3be7 00:07:23.422 19:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:23.422 19:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:23.422 19:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:23.682 19:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 74b45b66-d531-4cb2-8f7b-c1441fcf90ec 00:07:23.943 19:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:23.943 [2024-11-26 19:45:24.706460] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.943 19:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:24.204 19:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3457050 00:07:24.204 19:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:24.204 19:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:24.204 19:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3457050 /var/tmp/bdevperf.sock 00:07:24.204 19:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3457050 ']' 00:07:24.204 19:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:24.204 19:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.204 19:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:24.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:24.204 19:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.204 19:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:24.204 [2024-11-26 19:45:24.958740] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
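The grow scenario under test was staged just above: the lvstore sits on an AIO bdev whose 200M backing file yields 49 four-MiB data clusters; the file has already been truncated to 400M and rescanned (block count 51200 -> 102400), and bdev_lvol_grow_lvstore is issued mid-run below, after which total_data_clusters reads 99. A condensed sketch of that sequence, with aio_file and <lvs-uuid> as placeholders for the real backing path and for 8ecc8a53-d4e6-46b9-b1ad-1c67064d3be7:

  truncate -s 200M aio_file
  rpc.py bdev_aio_create aio_file aio_bdev 4096
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs   # 49 clusters
  rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150
  truncate -s 400M aio_file                     # grow the backing file underneath the bdev
  rpc.py bdev_aio_rescan aio_bdev               # AIO bdev picks up the new block count
  rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>   # lvstore claims the new clusters -> 99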
00:07:24.204 [2024-11-26 19:45:24.958809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3457050 ]
00:07:24.466 [2024-11-26 19:45:25.050960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:24.466 [2024-11-26 19:45:25.103934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:25.037 19:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:25.037 19:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0
00:07:25.037 19:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:07:25.297 Nvme0n1
00:07:25.297 19:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:07:25.559 [
00:07:25.559   {
00:07:25.559     "name": "Nvme0n1",
00:07:25.559     "aliases": [
00:07:25.559       "74b45b66-d531-4cb2-8f7b-c1441fcf90ec"
00:07:25.559     ],
00:07:25.559     "product_name": "NVMe disk",
00:07:25.559     "block_size": 4096,
00:07:25.559     "num_blocks": 38912,
00:07:25.559     "uuid": "74b45b66-d531-4cb2-8f7b-c1441fcf90ec",
00:07:25.559     "numa_id": 0,
00:07:25.559     "assigned_rate_limits": {
00:07:25.559       "rw_ios_per_sec": 0,
00:07:25.559       "rw_mbytes_per_sec": 0,
00:07:25.559       "r_mbytes_per_sec": 0,
00:07:25.559       "w_mbytes_per_sec": 0
00:07:25.559     },
00:07:25.559     "claimed": false,
00:07:25.559     "zoned": false,
00:07:25.559     "supported_io_types": {
00:07:25.559       "read": true,
00:07:25.559       "write": true,
00:07:25.559       "unmap": true,
00:07:25.559       "flush": true,
00:07:25.559       "reset": true,
00:07:25.559       "nvme_admin": true,
00:07:25.559       "nvme_io": true,
00:07:25.559       "nvme_io_md": false,
00:07:25.559       "write_zeroes": true,
00:07:25.559       "zcopy": false,
00:07:25.559       "get_zone_info": false,
00:07:25.559       "zone_management": false,
00:07:25.559       "zone_append": false,
00:07:25.559       "compare": true,
00:07:25.559       "compare_and_write": true,
00:07:25.559       "abort": true,
00:07:25.559       "seek_hole": false,
00:07:25.559       "seek_data": false,
00:07:25.559       "copy": true,
00:07:25.559       "nvme_iov_md": false
00:07:25.559     },
00:07:25.559     "memory_domains": [
00:07:25.559       {
00:07:25.559         "dma_device_id": "system",
00:07:25.559         "dma_device_type": 1
00:07:25.559       }
00:07:25.559     ],
00:07:25.559     "driver_specific": {
00:07:25.559       "nvme": [
00:07:25.559         {
00:07:25.559           "trid": {
00:07:25.559             "trtype": "TCP",
00:07:25.559             "adrfam": "IPv4",
00:07:25.559             "traddr": "10.0.0.2",
00:07:25.559             "trsvcid": "4420",
00:07:25.559             "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:07:25.559           },
00:07:25.559           "ctrlr_data": {
00:07:25.559             "cntlid": 1,
00:07:25.559             "vendor_id": "0x8086",
00:07:25.559             "model_number": "SPDK bdev Controller",
00:07:25.559             "serial_number": "SPDK0",
00:07:25.559             "firmware_revision": "25.01",
00:07:25.559             "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:07:25.559             "oacs": {
00:07:25.559               "security": 0,
00:07:25.559               "format": 0,
00:07:25.559               "firmware": 0,
00:07:25.559               "ns_manage": 0
00:07:25.559             },
00:07:25.559             "multi_ctrlr": true,
00:07:25.559             "ana_reporting": false
00:07:25.559           },
00:07:25.559           "vs": {
00:07:25.559             "nvme_version": "1.3"
00:07:25.559           },
00:07:25.559           "ns_data": {
00:07:25.559             "id": 1,
00:07:25.559             "can_share": true
00:07:25.559           }
00:07:25.559         }
00:07:25.559       ],
00:07:25.559       "mp_policy": "active_passive"
00:07:25.559     }
00:07:25.559   }
00:07:25.559 ]
00:07:25.559 19:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3457438
00:07:25.559 19:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:07:25.559 19:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:07:25.559 Running I/O for 10 seconds...
00:07:26.500 Latency(us)
00:07:26.501 [2024-11-26T18:45:27.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:26.501 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:26.501 Nvme0n1 : 1.00 23921.00 93.44 0.00 0.00 0.00 0.00 0.00
00:07:26.501 [2024-11-26T18:45:27.322Z] ===================================================================================================================
00:07:26.501 [2024-11-26T18:45:27.322Z] Total : 23921.00 93.44 0.00 0.00 0.00 0.00 0.00
00:07:26.501 
00:07:27.447 19:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8ecc8a53-d4e6-46b9-b1ad-1c67064d3be7
00:07:27.710 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:27.710 Nvme0n1 : 2.00 24591.00 96.06 0.00 0.00 0.00 0.00 0.00
00:07:27.710 [2024-11-26T18:45:28.531Z] ===================================================================================================================
00:07:27.710 [2024-11-26T18:45:28.531Z] Total : 24591.00 96.06 0.00 0.00 0.00 0.00 0.00
00:07:27.710 
00:07:27.710 true
00:07:27.710 19:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ecc8a53-d4e6-46b9-b1ad-1c67064d3be7
00:07:27.710 19:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:07:27.971 19:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:07:27.971 19:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:07:27.971 19:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3457438
00:07:28.543 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:28.543 Nvme0n1 : 3.00 24815.00 96.93 0.00 0.00 0.00 0.00 0.00
00:07:28.543 [2024-11-26T18:45:29.364Z] ===================================================================================================================
00:07:28.543 [2024-11-26T18:45:29.364Z] Total : 24815.00 96.93 0.00 0.00 0.00 0.00 0.00
00:07:28.543 
00:07:29.487 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:29.487 Nvme0n1 : 4.00 24915.00 97.32 0.00 0.00 0.00 0.00 0.00
00:07:29.487 [2024-11-26T18:45:30.308Z] ===================================================================================================================
00:07:29.487 [2024-11-26T18:45:30.308Z] Total : 24915.00 97.32 0.00 0.00 0.00 0.00 0.00
00:07:29.487 
00:07:30.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:30.873 Nvme0n1 : 5.00 24993.40 97.63 0.00 0.00 0.00 0.00 0.00
00:07:30.873 [2024-11-26T18:45:31.694Z] ===================================================================================================================
00:07:30.873 [2024-11-26T18:45:31.694Z] Total : 24993.40 97.63 0.00 0.00 0.00 0.00 0.00
00:07:30.873 
00:07:31.814 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:31.814 Nvme0n1 : 6.00 25062.33 97.90 0.00 0.00 0.00 0.00 0.00
00:07:31.814 [2024-11-26T18:45:32.635Z] ===================================================================================================================
00:07:31.815 [2024-11-26T18:45:32.636Z] Total : 25062.33 97.90 0.00 0.00 0.00 0.00 0.00
00:07:31.815 
00:07:32.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:32.756 Nvme0n1 : 7.00 25101.14 98.05 0.00 0.00 0.00 0.00 0.00
00:07:32.756 [2024-11-26T18:45:33.577Z] ===================================================================================================================
00:07:32.756 [2024-11-26T18:45:33.577Z] Total : 25101.14 98.05 0.00 0.00 0.00 0.00 0.00
00:07:32.756 
00:07:33.699 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:33.699 Nvme0n1 : 8.00 25138.75 98.20 0.00 0.00 0.00 0.00 0.00
00:07:33.699 [2024-11-26T18:45:34.520Z] ===================================================================================================================
00:07:33.699 [2024-11-26T18:45:34.520Z] Total : 25138.75 98.20 0.00 0.00 0.00 0.00 0.00
00:07:33.699 
00:07:34.640 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:34.640 Nvme0n1 : 9.00 25161.00 98.29 0.00 0.00 0.00 0.00 0.00
00:07:34.640 [2024-11-26T18:45:35.461Z] ===================================================================================================================
00:07:34.640 [2024-11-26T18:45:35.461Z] Total : 25161.00 98.29 0.00 0.00 0.00 0.00 0.00
00:07:34.640 
00:07:35.581 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:35.581 Nvme0n1 : 10.00 25191.40 98.40 0.00 0.00 0.00 0.00 0.00
00:07:35.581 [2024-11-26T18:45:36.402Z] ===================================================================================================================
00:07:35.581 [2024-11-26T18:45:36.402Z] Total : 25191.40 98.40 0.00 0.00 0.00 0.00 0.00
00:07:35.581 
00:07:35.581 
00:07:35.581 Latency(us)
00:07:35.581 [2024-11-26T18:45:36.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:35.581 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:35.581 Nvme0n1 : 10.00 25190.00 98.40 0.00 0.00 5077.75 2512.21 17585.49
00:07:35.581 [2024-11-26T18:45:36.402Z] ===================================================================================================================
00:07:35.581 [2024-11-26T18:45:36.402Z] Total : 25190.00 98.40 0.00 0.00 5077.75 2512.21 17585.49
00:07:35.581 {
00:07:35.581   "results": [
00:07:35.581     {
00:07:35.581       "job": "Nvme0n1",
00:07:35.581       "core_mask": "0x2",
00:07:35.581       "workload": "randwrite",
00:07:35.581       "status": "finished",
00:07:35.581       "queue_depth": 128,
00:07:35.581       "io_size": 4096,
00:07:35.581       "runtime": 10.003055,
00:07:35.581       "iops": 25190.004453639413,
00:07:35.581       "mibps": 98.39845489702896,
00:07:35.581       "io_failed": 0,
00:07:35.581       "io_timeout": 0,
00:07:35.581       "avg_latency_us": 5077.749615136831,
00:07:35.581       "min_latency_us": 2512.213333333333,
00:07:35.581       "max_latency_us": 17585.493333333332
00:07:35.581     }
00:07:35.581   ],
00:07:35.581   "core_count": 1
00:07:35.581 }
00:07:35.581 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3457050
00:07:35.581 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3457050 ']'
00:07:35.581 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3457050
00:07:35.581 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname
00:07:35.581 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:35.581 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3457050
00:07:35.842 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:07:35.842 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:07:35.842 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3457050'
00:07:35.842 killing process with pid 3457050
00:07:35.842 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3457050
00:07:35.842 Received shutdown signal, test time was about 10.000000 seconds
00:07:35.842 
00:07:35.842 Latency(us)
00:07:35.842 [2024-11-26T18:45:36.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:35.842 [2024-11-26T18:45:36.663Z] ===================================================================================================================
00:07:35.842 [2024-11-26T18:45:36.663Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:07:35.842 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3457050
00:07:35.842 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:36.104 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:36.104 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ecc8a53-d4e6-46b9-b1ad-1c67064d3be7
00:07:36.104 19:45:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:07:36.365 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:07:36.365 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:07:36.365 19:45:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:36.626 [2024-11-26 19:45:37.186664] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:36.626 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ecc8a53-d4e6-46b9-b1ad-1c67064d3be7 00:07:36.626 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:36.626 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ecc8a53-d4e6-46b9-b1ad-1c67064d3be7 00:07:36.626 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:36.626 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.626 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:36.626 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.626 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:36.626 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.626 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:36.626 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:36.626 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ecc8a53-d4e6-46b9-b1ad-1c67064d3be7 00:07:36.626 request: 00:07:36.626 { 00:07:36.626 "uuid": "8ecc8a53-d4e6-46b9-b1ad-1c67064d3be7", 00:07:36.626 "method": "bdev_lvol_get_lvstores", 00:07:36.626 "req_id": 1 00:07:36.626 } 00:07:36.626 Got JSON-RPC error response 00:07:36.626 response: 00:07:36.626 { 00:07:36.626 "code": -19, 00:07:36.626 "message": "No such device" 00:07:36.626 } 00:07:36.626 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:36.626 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:36.626 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:36.626 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:36.626 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:36.887 aio_bdev 00:07:36.887 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 74b45b66-d531-4cb2-8f7b-c1441fcf90ec 00:07:36.887 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=74b45b66-d531-4cb2-8f7b-c1441fcf90ec 00:07:36.887 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:36.887 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:36.887 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:36.887 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:36.887 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:37.148 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 74b45b66-d531-4cb2-8f7b-c1441fcf90ec -t 2000 00:07:37.148 [ 00:07:37.148 { 00:07:37.148 "name": "74b45b66-d531-4cb2-8f7b-c1441fcf90ec", 00:07:37.148 "aliases": [ 00:07:37.148 "lvs/lvol" 00:07:37.148 ], 00:07:37.148 "product_name": "Logical Volume", 00:07:37.148 "block_size": 4096, 00:07:37.148 "num_blocks": 38912, 00:07:37.148 "uuid": "74b45b66-d531-4cb2-8f7b-c1441fcf90ec", 00:07:37.148 "assigned_rate_limits": { 00:07:37.148 "rw_ios_per_sec": 0, 00:07:37.148 "rw_mbytes_per_sec": 0, 00:07:37.148 "r_mbytes_per_sec": 0, 00:07:37.148 "w_mbytes_per_sec": 0 00:07:37.148 }, 00:07:37.148 "claimed": false, 00:07:37.148 "zoned": false, 00:07:37.148 "supported_io_types": { 00:07:37.148 "read": true, 00:07:37.148 "write": true, 00:07:37.148 "unmap": true, 00:07:37.148 "flush": false, 00:07:37.148 "reset": true, 00:07:37.148 "nvme_admin": false, 00:07:37.148 "nvme_io": false, 00:07:37.148 "nvme_io_md": false, 00:07:37.148 "write_zeroes": true, 00:07:37.148 "zcopy": false, 00:07:37.148 "get_zone_info": false, 00:07:37.148 "zone_management": false, 00:07:37.148 "zone_append": false, 00:07:37.148 "compare": false, 00:07:37.148 "compare_and_write": false, 00:07:37.148 "abort": false, 00:07:37.148 "seek_hole": true, 00:07:37.148 "seek_data": true, 00:07:37.148 "copy": false, 00:07:37.148 "nvme_iov_md": false 00:07:37.148 }, 00:07:37.148 "driver_specific": { 00:07:37.148 "lvol": { 00:07:37.148 "lvol_store_uuid": "8ecc8a53-d4e6-46b9-b1ad-1c67064d3be7", 00:07:37.148 "base_bdev": "aio_bdev", 00:07:37.148 "thin_provision": false, 00:07:37.148 "num_allocated_clusters": 38, 00:07:37.148 "snapshot": false, 00:07:37.148 "clone": false, 00:07:37.148 "esnap_clone": false 00:07:37.148 } 00:07:37.148 } 00:07:37.148 } 00:07:37.148 ] 00:07:37.148 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:37.148 19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ecc8a53-d4e6-46b9-b1ad-1c67064d3be7 00:07:37.148 
19:45:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:37.409 19:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:37.409 19:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ecc8a53-d4e6-46b9-b1ad-1c67064d3be7 00:07:37.409 19:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:37.669 19:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:37.669 19:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 74b45b66-d531-4cb2-8f7b-c1441fcf90ec 00:07:37.669 19:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8ecc8a53-d4e6-46b9-b1ad-1c67064d3be7 00:07:37.929 19:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:38.190 19:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:38.190 00:07:38.190 real 0m15.825s 00:07:38.190 user 0m15.438s 00:07:38.190 sys 0m1.504s 00:07:38.190 19:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.190 19:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:38.190 ************************************ 00:07:38.190 END TEST lvs_grow_clean 00:07:38.190 ************************************ 00:07:38.190 19:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:38.190 19:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:38.190 19:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.190 19:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:38.190 ************************************ 00:07:38.190 START TEST lvs_grow_dirty 00:07:38.190 ************************************ 00:07:38.190 19:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:38.190 19:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:38.190 19:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:38.190 19:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:38.190 19:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:38.190 19:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:38.190 19:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:38.190 19:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:38.190 19:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:38.190 19:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:38.450 19:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:38.450 19:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:38.711 19:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e70c6780-3fcb-48fc-b2e1-04968b606491 00:07:38.711 19:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e70c6780-3fcb-48fc-b2e1-04968b606491 00:07:38.711 19:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:38.971 19:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:38.971 19:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:38.971 19:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e70c6780-3fcb-48fc-b2e1-04968b606491 lvol 150 00:07:38.971 19:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=92cfdedb-9eed-4003-b217-eb096cce4b11 00:07:38.971 19:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:38.971 19:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:39.231 [2024-11-26 19:45:39.862497] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:39.231 [2024-11-26 19:45:39.862538] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:39.231 true 00:07:39.231 19:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e70c6780-3fcb-48fc-b2e1-04968b606491 00:07:39.231 19:45:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:39.492 19:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:39.492 19:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:39.492 19:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 92cfdedb-9eed-4003-b217-eb096cce4b11 00:07:39.753 19:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:39.753 [2024-11-26 19:45:40.532421] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:39.753 19:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:40.013 19:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3462267 00:07:40.013 19:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:40.013 19:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:40.013 19:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3462267 /var/tmp/bdevperf.sock 00:07:40.013 19:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3462267 ']' 00:07:40.013 19:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:40.013 19:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.013 19:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:40.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:40.013 19:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.013 19:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:40.013 [2024-11-26 19:45:40.776318] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
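[Annotation] The dirty variant above rebuilds the same fixture the clean run used: a 200M AIO file carrying an lvstore with 4 MiB clusters and a 150M lvol, after which the file is truncated to 400M and the AIO bdev rescanned before bdevperf attaches over TCP. Condensed out of the xtrace, the grow recipe this suite keeps exercising looks roughly like the sketch below; it assumes a running nvmf_tgt with rpc.py on PATH talking to its default socket, and /tmp/aio_file is an illustrative stand-in for the workspace path, not taken from this log.

    # Sketch of the lvstore-grow flow traced above (paths/variables illustrative)
    truncate -s 200M /tmp/aio_file
    rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs)
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)
    truncate -s 400M /tmp/aio_file               # grow the backing file
    rpc.py bdev_aio_rescan aio_bdev              # AIO bdev picks up the new size
    rpc.py bdev_lvol_grow_lvstore -u "$lvs"      # lvstore claims the added clusters
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 -> 99 in this run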
00:07:40.013 [2024-11-26 19:45:40.776365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3462267 ] 00:07:40.274 [2024-11-26 19:45:40.859261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.274 [2024-11-26 19:45:40.889211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.844 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.844 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:40.844 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:41.415 Nvme0n1 00:07:41.415 19:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:41.415 [ 00:07:41.415 { 00:07:41.415 "name": "Nvme0n1", 00:07:41.415 "aliases": [ 00:07:41.415 "92cfdedb-9eed-4003-b217-eb096cce4b11" 00:07:41.415 ], 00:07:41.415 "product_name": "NVMe disk", 00:07:41.415 "block_size": 4096, 00:07:41.415 "num_blocks": 38912, 00:07:41.415 "uuid": "92cfdedb-9eed-4003-b217-eb096cce4b11", 00:07:41.415 "numa_id": 0, 00:07:41.415 "assigned_rate_limits": { 00:07:41.415 "rw_ios_per_sec": 0, 00:07:41.415 "rw_mbytes_per_sec": 0, 00:07:41.415 "r_mbytes_per_sec": 0, 00:07:41.415 "w_mbytes_per_sec": 0 00:07:41.415 }, 00:07:41.415 "claimed": false, 00:07:41.415 "zoned": false, 00:07:41.415 "supported_io_types": { 00:07:41.415 "read": true, 00:07:41.415 "write": true, 00:07:41.415 "unmap": true, 00:07:41.415 "flush": true, 00:07:41.415 "reset": true, 00:07:41.415 "nvme_admin": true, 00:07:41.415 "nvme_io": true, 00:07:41.415 "nvme_io_md": false, 00:07:41.415 "write_zeroes": true, 00:07:41.415 "zcopy": false, 00:07:41.415 "get_zone_info": false, 00:07:41.415 "zone_management": false, 00:07:41.415 "zone_append": false, 00:07:41.415 "compare": true, 00:07:41.415 "compare_and_write": true, 00:07:41.415 "abort": true, 00:07:41.415 "seek_hole": false, 00:07:41.415 "seek_data": false, 00:07:41.415 "copy": true, 00:07:41.415 "nvme_iov_md": false 00:07:41.415 }, 00:07:41.415 "memory_domains": [ 00:07:41.415 { 00:07:41.415 "dma_device_id": "system", 00:07:41.415 "dma_device_type": 1 00:07:41.415 } 00:07:41.415 ], 00:07:41.415 "driver_specific": { 00:07:41.415 "nvme": [ 00:07:41.415 { 00:07:41.415 "trid": { 00:07:41.415 "trtype": "TCP", 00:07:41.415 "adrfam": "IPv4", 00:07:41.415 "traddr": "10.0.0.2", 00:07:41.415 "trsvcid": "4420", 00:07:41.415 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:41.415 }, 00:07:41.415 "ctrlr_data": { 00:07:41.415 "cntlid": 1, 00:07:41.415 "vendor_id": "0x8086", 00:07:41.415 "model_number": "SPDK bdev Controller", 00:07:41.415 "serial_number": "SPDK0", 00:07:41.415 "firmware_revision": "25.01", 00:07:41.415 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:41.415 "oacs": { 00:07:41.415 "security": 0, 00:07:41.415 "format": 0, 00:07:41.415 "firmware": 0, 00:07:41.415 "ns_manage": 0 00:07:41.415 }, 00:07:41.415 "multi_ctrlr": true, 00:07:41.415 
"ana_reporting": false 00:07:41.415 }, 00:07:41.415 "vs": { 00:07:41.415 "nvme_version": "1.3" 00:07:41.415 }, 00:07:41.415 "ns_data": { 00:07:41.415 "id": 1, 00:07:41.415 "can_share": true 00:07:41.415 } 00:07:41.415 } 00:07:41.415 ], 00:07:41.416 "mp_policy": "active_passive" 00:07:41.416 } 00:07:41.416 } 00:07:41.416 ] 00:07:41.416 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3462685 00:07:41.416 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:41.416 19:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:41.416 Running I/O for 10 seconds... 00:07:42.800 Latency(us) 00:07:42.800 [2024-11-26T18:45:43.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:42.800 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.800 Nvme0n1 : 1.00 24108.00 94.17 0.00 0.00 0.00 0.00 0.00 00:07:42.800 [2024-11-26T18:45:43.621Z] =================================================================================================================== 00:07:42.800 [2024-11-26T18:45:43.621Z] Total : 24108.00 94.17 0.00 0.00 0.00 0.00 0.00 00:07:42.800 00:07:43.371 19:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e70c6780-3fcb-48fc-b2e1-04968b606491 00:07:43.633 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.633 Nvme0n1 : 2.00 24342.00 95.09 0.00 0.00 0.00 0.00 0.00 00:07:43.633 [2024-11-26T18:45:44.454Z] =================================================================================================================== 00:07:43.633 [2024-11-26T18:45:44.454Z] Total : 24342.00 95.09 0.00 0.00 0.00 0.00 0.00 00:07:43.633 00:07:43.633 true 00:07:43.633 19:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e70c6780-3fcb-48fc-b2e1-04968b606491 00:07:43.633 19:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:43.893 19:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:43.893 19:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:43.893 19:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3462685 00:07:44.466 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.466 Nvme0n1 : 3.00 24417.33 95.38 0.00 0.00 0.00 0.00 0.00 00:07:44.466 [2024-11-26T18:45:45.287Z] =================================================================================================================== 00:07:44.466 [2024-11-26T18:45:45.287Z] Total : 24417.33 95.38 0.00 0.00 0.00 0.00 0.00 00:07:44.466 00:07:45.855 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.855 Nvme0n1 : 4.00 24465.00 95.57 0.00 0.00 0.00 0.00 0.00 00:07:45.855 [2024-11-26T18:45:46.676Z] 
=================================================================================================================== 00:07:45.855 [2024-11-26T18:45:46.676Z] Total : 24465.00 95.57 0.00 0.00 0.00 0.00 0.00 00:07:45.855 00:07:46.428 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.428 Nvme0n1 : 5.00 24503.20 95.72 0.00 0.00 0.00 0.00 0.00 00:07:46.428 [2024-11-26T18:45:47.249Z] =================================================================================================================== 00:07:46.428 [2024-11-26T18:45:47.249Z] Total : 24503.20 95.72 0.00 0.00 0.00 0.00 0.00 00:07:46.428 00:07:47.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.812 Nvme0n1 : 6.00 24531.33 95.83 0.00 0.00 0.00 0.00 0.00 00:07:47.812 [2024-11-26T18:45:48.633Z] =================================================================================================================== 00:07:47.812 [2024-11-26T18:45:48.633Z] Total : 24531.33 95.83 0.00 0.00 0.00 0.00 0.00 00:07:47.812 00:07:48.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.433 Nvme0n1 : 7.00 24565.14 95.96 0.00 0.00 0.00 0.00 0.00 00:07:48.433 [2024-11-26T18:45:49.255Z] =================================================================================================================== 00:07:48.434 [2024-11-26T18:45:49.255Z] Total : 24565.14 95.96 0.00 0.00 0.00 0.00 0.00 00:07:48.434 00:07:49.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.437 Nvme0n1 : 8.00 24591.50 96.06 0.00 0.00 0.00 0.00 0.00 00:07:49.437 [2024-11-26T18:45:50.258Z] =================================================================================================================== 00:07:49.437 [2024-11-26T18:45:50.258Z] Total : 24591.50 96.06 0.00 0.00 0.00 0.00 0.00 00:07:49.437 00:07:50.819 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.819 Nvme0n1 : 9.00 24615.56 96.15 0.00 0.00 0.00 0.00 0.00 00:07:50.819 [2024-11-26T18:45:51.640Z] =================================================================================================================== 00:07:50.819 [2024-11-26T18:45:51.640Z] Total : 24615.56 96.15 0.00 0.00 0.00 0.00 0.00 00:07:50.819 00:07:51.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.759 Nvme0n1 : 10.00 24633.20 96.22 0.00 0.00 0.00 0.00 0.00 00:07:51.759 [2024-11-26T18:45:52.580Z] =================================================================================================================== 00:07:51.759 [2024-11-26T18:45:52.580Z] Total : 24633.20 96.22 0.00 0.00 0.00 0.00 0.00 00:07:51.759 00:07:51.759 00:07:51.759 Latency(us) 00:07:51.759 [2024-11-26T18:45:52.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.759 Nvme0n1 : 10.01 24633.40 96.22 0.00 0.00 5192.60 4014.08 14745.60 00:07:51.759 [2024-11-26T18:45:52.580Z] =================================================================================================================== 00:07:51.759 [2024-11-26T18:45:52.580Z] Total : 24633.40 96.22 0.00 0.00 5192.60 4014.08 14745.60 00:07:51.759 { 00:07:51.759 "results": [ 00:07:51.759 { 00:07:51.759 "job": "Nvme0n1", 00:07:51.759 "core_mask": "0x2", 00:07:51.759 "workload": "randwrite", 00:07:51.759 "status": "finished", 00:07:51.759 "queue_depth": 128, 00:07:51.759 "io_size": 4096, 00:07:51.759 
"runtime": 10.005115, 00:07:51.759 "iops": 24633.40001589187, 00:07:51.759 "mibps": 96.22421881207762, 00:07:51.759 "io_failed": 0, 00:07:51.759 "io_timeout": 0, 00:07:51.759 "avg_latency_us": 5192.603425356379, 00:07:51.759 "min_latency_us": 4014.08, 00:07:51.759 "max_latency_us": 14745.6 00:07:51.759 } 00:07:51.759 ], 00:07:51.759 "core_count": 1 00:07:51.759 } 00:07:51.759 19:45:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3462267 00:07:51.759 19:45:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3462267 ']' 00:07:51.759 19:45:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3462267 00:07:51.759 19:45:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:51.759 19:45:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.759 19:45:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3462267 00:07:51.759 19:45:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:51.759 19:45:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:51.759 19:45:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3462267' 00:07:51.759 killing process with pid 3462267 00:07:51.759 19:45:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3462267 00:07:51.759 Received shutdown signal, test time was about 10.000000 seconds 00:07:51.759 00:07:51.759 Latency(us) 00:07:51.759 [2024-11-26T18:45:52.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.759 [2024-11-26T18:45:52.580Z] =================================================================================================================== 00:07:51.759 [2024-11-26T18:45:52.580Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:51.759 19:45:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3462267 00:07:51.759 19:45:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:52.019 19:45:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:52.019 19:45:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e70c6780-3fcb-48fc-b2e1-04968b606491 00:07:52.019 19:45:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:52.279 19:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:52.279 19:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:52.279 19:45:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3456336 00:07:52.279 19:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3456336 00:07:52.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3456336 Killed "${NVMF_APP[@]}" "$@" 00:07:52.279 19:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:52.279 19:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:52.279 19:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:52.279 19:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:52.279 19:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:52.279 19:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3466087 00:07:52.279 19:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3466087 00:07:52.279 19:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:52.279 19:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3466087 ']' 00:07:52.279 19:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.279 19:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.279 19:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.279 19:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.279 19:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:52.538 [2024-11-26 19:45:53.123177] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:07:52.539 [2024-11-26 19:45:53.123233] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.539 [2024-11-26 19:45:53.214006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.539 [2024-11-26 19:45:53.243200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.539 [2024-11-26 19:45:53.243227] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.539 [2024-11-26 19:45:53.243232] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.539 [2024-11-26 19:45:53.243237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
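[Annotation] These app_setup_trace notices appear because the restarted target was launched with -e 0xFFFF, enabling every tracepoint group; the snapshot command they quote can be run against the live target. A sketch follows (the binary path is an assumption, inferred from the same build layout as the nvmf_tgt invocation above):

    # Dump a snapshot of the live nvmf tracepoints; '-s nvmf -i 0' matches the
    # shm name and the '-i 0' instance id the target was started with.
    build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace_snapshot.txt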
00:07:52.539 [2024-11-26 19:45:53.243244] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:52.539 [2024-11-26 19:45:53.243686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.110 19:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.110 19:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:53.110 19:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:53.110 19:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:53.110 19:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:53.370 19:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.370 19:45:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:53.370 [2024-11-26 19:45:54.106055] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:53.370 [2024-11-26 19:45:54.106131] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:53.370 [2024-11-26 19:45:54.106153] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:53.370 19:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:53.370 19:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 92cfdedb-9eed-4003-b217-eb096cce4b11 00:07:53.370 19:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=92cfdedb-9eed-4003-b217-eb096cce4b11 00:07:53.370 19:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:53.370 19:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:53.370 19:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:53.370 19:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:53.370 19:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:53.630 19:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 92cfdedb-9eed-4003-b217-eb096cce4b11 -t 2000 00:07:53.890 [ 00:07:53.890 { 00:07:53.890 "name": "92cfdedb-9eed-4003-b217-eb096cce4b11", 00:07:53.890 "aliases": [ 00:07:53.890 "lvs/lvol" 00:07:53.890 ], 00:07:53.890 "product_name": "Logical Volume", 00:07:53.890 "block_size": 4096, 00:07:53.890 "num_blocks": 38912, 00:07:53.890 "uuid": "92cfdedb-9eed-4003-b217-eb096cce4b11", 00:07:53.890 "assigned_rate_limits": { 00:07:53.890 "rw_ios_per_sec": 0, 00:07:53.890 "rw_mbytes_per_sec": 0, 
00:07:53.890 "r_mbytes_per_sec": 0, 00:07:53.890 "w_mbytes_per_sec": 0 00:07:53.890 }, 00:07:53.890 "claimed": false, 00:07:53.890 "zoned": false, 00:07:53.890 "supported_io_types": { 00:07:53.890 "read": true, 00:07:53.890 "write": true, 00:07:53.890 "unmap": true, 00:07:53.890 "flush": false, 00:07:53.890 "reset": true, 00:07:53.890 "nvme_admin": false, 00:07:53.890 "nvme_io": false, 00:07:53.890 "nvme_io_md": false, 00:07:53.890 "write_zeroes": true, 00:07:53.890 "zcopy": false, 00:07:53.890 "get_zone_info": false, 00:07:53.890 "zone_management": false, 00:07:53.890 "zone_append": false, 00:07:53.890 "compare": false, 00:07:53.890 "compare_and_write": false, 00:07:53.890 "abort": false, 00:07:53.890 "seek_hole": true, 00:07:53.890 "seek_data": true, 00:07:53.890 "copy": false, 00:07:53.890 "nvme_iov_md": false 00:07:53.890 }, 00:07:53.890 "driver_specific": { 00:07:53.890 "lvol": { 00:07:53.890 "lvol_store_uuid": "e70c6780-3fcb-48fc-b2e1-04968b606491", 00:07:53.890 "base_bdev": "aio_bdev", 00:07:53.890 "thin_provision": false, 00:07:53.890 "num_allocated_clusters": 38, 00:07:53.890 "snapshot": false, 00:07:53.890 "clone": false, 00:07:53.890 "esnap_clone": false 00:07:53.890 } 00:07:53.890 } 00:07:53.890 } 00:07:53.890 ] 00:07:53.890 19:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:53.890 19:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e70c6780-3fcb-48fc-b2e1-04968b606491 00:07:53.890 19:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:53.890 19:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:53.890 19:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e70c6780-3fcb-48fc-b2e1-04968b606491 00:07:53.890 19:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:54.149 19:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:54.149 19:45:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:54.410 [2024-11-26 19:45:54.970740] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:54.410 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e70c6780-3fcb-48fc-b2e1-04968b606491 00:07:54.410 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:54.410 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e70c6780-3fcb-48fc-b2e1-04968b606491 00:07:54.410 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:54.410 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.410 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:54.410 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.410 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:54.410 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.410 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:54.410 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:54.410 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e70c6780-3fcb-48fc-b2e1-04968b606491 00:07:54.410 request: 00:07:54.410 { 00:07:54.410 "uuid": "e70c6780-3fcb-48fc-b2e1-04968b606491", 00:07:54.410 "method": "bdev_lvol_get_lvstores", 00:07:54.410 "req_id": 1 00:07:54.410 } 00:07:54.410 Got JSON-RPC error response 00:07:54.410 response: 00:07:54.410 { 00:07:54.410 "code": -19, 00:07:54.410 "message": "No such device" 00:07:54.410 } 00:07:54.410 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:54.410 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:54.410 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:54.410 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:54.410 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:54.670 aio_bdev 00:07:54.670 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 92cfdedb-9eed-4003-b217-eb096cce4b11 00:07:54.670 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=92cfdedb-9eed-4003-b217-eb096cce4b11 00:07:54.670 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:54.670 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:54.670 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:54.670 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:54.670 19:45:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:54.931 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 92cfdedb-9eed-4003-b217-eb096cce4b11 -t 2000 00:07:54.931 [ 00:07:54.931 { 00:07:54.931 "name": "92cfdedb-9eed-4003-b217-eb096cce4b11", 00:07:54.931 "aliases": [ 00:07:54.931 "lvs/lvol" 00:07:54.931 ], 00:07:54.931 "product_name": "Logical Volume", 00:07:54.931 "block_size": 4096, 00:07:54.931 "num_blocks": 38912, 00:07:54.931 "uuid": "92cfdedb-9eed-4003-b217-eb096cce4b11", 00:07:54.931 "assigned_rate_limits": { 00:07:54.931 "rw_ios_per_sec": 0, 00:07:54.931 "rw_mbytes_per_sec": 0, 00:07:54.931 "r_mbytes_per_sec": 0, 00:07:54.931 "w_mbytes_per_sec": 0 00:07:54.931 }, 00:07:54.931 "claimed": false, 00:07:54.931 "zoned": false, 00:07:54.931 "supported_io_types": { 00:07:54.931 "read": true, 00:07:54.931 "write": true, 00:07:54.931 "unmap": true, 00:07:54.931 "flush": false, 00:07:54.931 "reset": true, 00:07:54.931 "nvme_admin": false, 00:07:54.931 "nvme_io": false, 00:07:54.931 "nvme_io_md": false, 00:07:54.931 "write_zeroes": true, 00:07:54.931 "zcopy": false, 00:07:54.931 "get_zone_info": false, 00:07:54.931 "zone_management": false, 00:07:54.931 "zone_append": false, 00:07:54.931 "compare": false, 00:07:54.931 "compare_and_write": false, 00:07:54.931 "abort": false, 00:07:54.931 "seek_hole": true, 00:07:54.931 "seek_data": true, 00:07:54.931 "copy": false, 00:07:54.931 "nvme_iov_md": false 00:07:54.931 }, 00:07:54.931 "driver_specific": { 00:07:54.931 "lvol": { 00:07:54.931 "lvol_store_uuid": "e70c6780-3fcb-48fc-b2e1-04968b606491", 00:07:54.931 "base_bdev": "aio_bdev", 00:07:54.931 "thin_provision": false, 00:07:54.931 "num_allocated_clusters": 38, 00:07:54.931 "snapshot": false, 00:07:54.931 "clone": false, 00:07:54.931 "esnap_clone": false 00:07:54.931 } 00:07:54.931 } 00:07:54.931 } 00:07:54.931 ] 00:07:54.931 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:54.931 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e70c6780-3fcb-48fc-b2e1-04968b606491 00:07:54.931 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:55.192 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:55.192 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e70c6780-3fcb-48fc-b2e1-04968b606491 00:07:55.192 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:55.192 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:55.192 19:45:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 92cfdedb-9eed-4003-b217-eb096cce4b11 00:07:55.452 19:45:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e70c6780-3fcb-48fc-b2e1-04968b606491 00:07:55.711 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:55.711 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:55.972 00:07:55.972 real 0m17.622s 00:07:55.972 user 0m45.714s 00:07:55.972 sys 0m3.301s 00:07:55.972 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.972 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:55.972 ************************************ 00:07:55.972 END TEST lvs_grow_dirty 00:07:55.972 ************************************ 00:07:55.972 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:55.972 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:55.972 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:55.972 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:55.972 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:55.972 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:55.972 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:55.972 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:55.972 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:55.972 nvmf_trace.0 00:07:55.972 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:55.972 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:55.972 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:55.972 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:55.972 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:55.972 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:55.972 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:55.972 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:55.972 rmmod nvme_tcp 00:07:55.972 rmmod nvme_fabrics 00:07:55.972 rmmod nvme_keyring 00:07:55.972 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:55.972 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:55.972 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:55.972 
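[Annotation] Worth noting before the teardown that follows: the dirty pass killed the app with SIGKILL mid-run, yet re-creating the AIO bdev on the same backing file triggered blobstore recovery (the "Performing recovery on blobstore" notices above) and the regrown lvstore came back intact, still reporting total_data_clusters 99 and free_clusters 61. A minimal recheck along the same lines (sketch only; /tmp/aio_file and $lvs are placeholders for the workspace path and the store uuid, e70c6780-... in this run):

    # After restarting the target, re-create the AIO bdev on the same file;
    # blobstore recovery runs and the lvstore reappears with its grown geometry.
    rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # expect 61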
19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3466087 ']' 00:07:55.972 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3466087 00:07:55.972 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3466087 ']' 00:07:55.972 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3466087 00:07:55.972 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:55.972 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.972 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3466087 00:07:56.231 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.231 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.231 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3466087' 00:07:56.231 killing process with pid 3466087 00:07:56.231 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3466087 00:07:56.231 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3466087 00:07:56.231 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:56.231 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:56.231 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:56.231 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:56.231 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:56.231 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:56.231 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:56.231 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:56.231 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:56.231 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.231 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.231 19:45:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:58.770 00:07:58.770 real 0m44.929s 00:07:58.770 user 1m7.578s 00:07:58.770 sys 0m10.961s 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:58.770 ************************************ 00:07:58.770 END TEST nvmf_lvs_grow 00:07:58.770 ************************************ 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:58.770 ************************************ 00:07:58.770 START TEST nvmf_bdev_io_wait 00:07:58.770 ************************************ 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:58.770 * Looking for test storage... 00:07:58.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:58.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.770 --rc genhtml_branch_coverage=1 00:07:58.770 --rc genhtml_function_coverage=1 00:07:58.770 --rc genhtml_legend=1 00:07:58.770 --rc geninfo_all_blocks=1 00:07:58.770 --rc geninfo_unexecuted_blocks=1 00:07:58.770 00:07:58.770 ' 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:58.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.770 --rc genhtml_branch_coverage=1 00:07:58.770 --rc genhtml_function_coverage=1 00:07:58.770 --rc genhtml_legend=1 00:07:58.770 --rc geninfo_all_blocks=1 00:07:58.770 --rc geninfo_unexecuted_blocks=1 00:07:58.770 00:07:58.770 ' 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:58.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.770 --rc genhtml_branch_coverage=1 00:07:58.770 --rc genhtml_function_coverage=1 00:07:58.770 --rc genhtml_legend=1 00:07:58.770 --rc geninfo_all_blocks=1 00:07:58.770 --rc geninfo_unexecuted_blocks=1 00:07:58.770 00:07:58.770 ' 00:07:58.770 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:58.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.770 --rc genhtml_branch_coverage=1 00:07:58.770 --rc genhtml_function_coverage=1 00:07:58.771 --rc genhtml_legend=1 00:07:58.771 --rc geninfo_all_blocks=1 00:07:58.771 --rc geninfo_unexecuted_blocks=1 00:07:58.771 00:07:58.771 ' 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:58.771 19:45:59 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:58.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:58.771 19:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:06.916 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:06.916 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:06.917 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.917 19:46:06 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:06.917 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:06.917 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:06.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:06.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:08:06.917 00:08:06.917 --- 10.0.0.2 ping statistics --- 00:08:06.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.917 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:06.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:06.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:08:06.917 00:08:06.917 --- 10.0.0.1 ping statistics --- 00:08:06.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.917 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3472510 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3472510 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:06.917 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3472510 ']' 00:08:06.918 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.918 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.918 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.918 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.918 19:46:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:06.918 [2024-11-26 19:46:06.996113] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:08:06.918 [2024-11-26 19:46:06.996192] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.918 [2024-11-26 19:46:07.098321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:06.918 [2024-11-26 19:46:07.154482] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.918 [2024-11-26 19:46:07.154535] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.918 [2024-11-26 19:46:07.154545] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.918 [2024-11-26 19:46:07.154554] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.918 [2024-11-26 19:46:07.154562] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:06.918 [2024-11-26 19:46:07.156953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.918 [2024-11-26 19:46:07.157115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.918 [2024-11-26 19:46:07.157284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.918 [2024-11-26 19:46:07.157284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:07.179 [2024-11-26 19:46:07.944231] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.179 Malloc0 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.179 19:46:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.441 [2024-11-26 19:46:08.009773] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3472670 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3472673 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:07.441 { 00:08:07.441 "params": { 
00:08:07.441 "name": "Nvme$subsystem", 00:08:07.441 "trtype": "$TEST_TRANSPORT", 00:08:07.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:07.441 "adrfam": "ipv4", 00:08:07.441 "trsvcid": "$NVMF_PORT", 00:08:07.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:07.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:07.441 "hdgst": ${hdgst:-false}, 00:08:07.441 "ddgst": ${ddgst:-false} 00:08:07.441 }, 00:08:07.441 "method": "bdev_nvme_attach_controller" 00:08:07.441 } 00:08:07.441 EOF 00:08:07.441 )") 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3472675 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3472678 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:07.441 { 00:08:07.441 "params": { 00:08:07.441 "name": "Nvme$subsystem", 00:08:07.441 "trtype": "$TEST_TRANSPORT", 00:08:07.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:07.441 "adrfam": "ipv4", 00:08:07.441 "trsvcid": "$NVMF_PORT", 00:08:07.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:07.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:07.441 "hdgst": ${hdgst:-false}, 00:08:07.441 "ddgst": ${ddgst:-false} 00:08:07.441 }, 00:08:07.441 "method": "bdev_nvme_attach_controller" 00:08:07.441 } 00:08:07.441 EOF 00:08:07.441 )") 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:07.441 { 00:08:07.441 "params": { 00:08:07.441 "name": "Nvme$subsystem", 00:08:07.441 "trtype": "$TEST_TRANSPORT", 00:08:07.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:07.441 "adrfam": "ipv4", 00:08:07.441 "trsvcid": "$NVMF_PORT", 00:08:07.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:07.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:07.441 "hdgst": ${hdgst:-false}, 
00:08:07.441 "ddgst": ${ddgst:-false} 00:08:07.441 }, 00:08:07.441 "method": "bdev_nvme_attach_controller" 00:08:07.441 } 00:08:07.441 EOF 00:08:07.441 )") 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:07.441 { 00:08:07.441 "params": { 00:08:07.441 "name": "Nvme$subsystem", 00:08:07.441 "trtype": "$TEST_TRANSPORT", 00:08:07.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:07.441 "adrfam": "ipv4", 00:08:07.441 "trsvcid": "$NVMF_PORT", 00:08:07.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:07.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:07.441 "hdgst": ${hdgst:-false}, 00:08:07.441 "ddgst": ${ddgst:-false} 00:08:07.441 }, 00:08:07.441 "method": "bdev_nvme_attach_controller" 00:08:07.441 } 00:08:07.441 EOF 00:08:07.441 )") 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3472670 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:07.441 "params": { 00:08:07.441 "name": "Nvme1", 00:08:07.441 "trtype": "tcp", 00:08:07.441 "traddr": "10.0.0.2", 00:08:07.441 "adrfam": "ipv4", 00:08:07.441 "trsvcid": "4420", 00:08:07.441 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:07.441 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:07.441 "hdgst": false, 00:08:07.441 "ddgst": false 00:08:07.441 }, 00:08:07.441 "method": "bdev_nvme_attach_controller" 00:08:07.441 }' 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:07.441 "params": { 00:08:07.441 "name": "Nvme1", 00:08:07.441 "trtype": "tcp", 00:08:07.441 "traddr": "10.0.0.2", 00:08:07.441 "adrfam": "ipv4", 00:08:07.441 "trsvcid": "4420", 00:08:07.441 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:07.441 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:07.441 "hdgst": false, 00:08:07.441 "ddgst": false 00:08:07.441 }, 00:08:07.441 "method": "bdev_nvme_attach_controller" 00:08:07.441 }' 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:07.441 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:07.441 "params": { 00:08:07.441 "name": "Nvme1", 00:08:07.441 "trtype": "tcp", 00:08:07.441 "traddr": "10.0.0.2", 00:08:07.441 "adrfam": "ipv4", 00:08:07.441 "trsvcid": "4420", 00:08:07.441 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:07.441 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:07.442 "hdgst": false, 00:08:07.442 "ddgst": false 00:08:07.442 }, 00:08:07.442 "method": "bdev_nvme_attach_controller" 00:08:07.442 }' 00:08:07.442 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:07.442 19:46:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:07.442 "params": { 00:08:07.442 "name": "Nvme1", 00:08:07.442 "trtype": "tcp", 00:08:07.442 "traddr": "10.0.0.2", 00:08:07.442 "adrfam": "ipv4", 00:08:07.442 "trsvcid": "4420", 00:08:07.442 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:07.442 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:07.442 "hdgst": false, 00:08:07.442 "ddgst": false 00:08:07.442 }, 00:08:07.442 "method": "bdev_nvme_attach_controller" 00:08:07.442 }' 00:08:07.442 [2024-11-26 19:46:08.068168] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:08:07.442 [2024-11-26 19:46:08.068235] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:07.442 [2024-11-26 19:46:08.069235] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:08:07.442 [2024-11-26 19:46:08.069301] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:07.442 [2024-11-26 19:46:08.073897] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:08:07.442 [2024-11-26 19:46:08.073960] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:07.442 [2024-11-26 19:46:08.074956] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:08:07.442 [2024-11-26 19:46:08.075021] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:07.703 [2024-11-26 19:46:08.264682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.703 [2024-11-26 19:46:08.304822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:07.703 [2024-11-26 19:46:08.331423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.703 [2024-11-26 19:46:08.370986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:07.703 [2024-11-26 19:46:08.401271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.703 [2024-11-26 19:46:08.440970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:07.703 [2024-11-26 19:46:08.494650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.964 [2024-11-26 19:46:08.534629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:07.964 Running I/O for 1 seconds... 00:08:07.964 Running I/O for 1 seconds... 00:08:07.964 Running I/O for 1 seconds... 00:08:07.964 Running I/O for 1 seconds... 00:08:08.906 12130.00 IOPS, 47.38 MiB/s 00:08:08.906 Latency(us) 00:08:08.906 [2024-11-26T18:46:09.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.906 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:08.906 Nvme1n1 : 1.01 12176.62 47.56 0.00 0.00 10473.42 5352.11 16602.45 00:08:08.906 [2024-11-26T18:46:09.727Z] =================================================================================================================== 00:08:08.906 [2024-11-26T18:46:09.727Z] Total : 12176.62 47.56 0.00 0.00 10473.42 5352.11 16602.45 00:08:08.906 12566.00 IOPS, 49.09 MiB/s 00:08:08.906 Latency(us) 00:08:08.906 [2024-11-26T18:46:09.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.906 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:08.906 Nvme1n1 : 1.01 12640.12 49.38 0.00 0.00 10094.56 4450.99 20097.71 00:08:08.906 [2024-11-26T18:46:09.727Z] =================================================================================================================== 00:08:08.906 [2024-11-26T18:46:09.727Z] Total : 12640.12 49.38 0.00 0.00 10094.56 4450.99 20097.71 00:08:08.906 173680.00 IOPS, 678.44 MiB/s 00:08:08.906 Latency(us) 00:08:08.906 [2024-11-26T18:46:09.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.907 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:08.907 Nvme1n1 : 1.00 173318.48 677.03 0.00 0.00 734.28 314.03 2048.00 00:08:08.907 [2024-11-26T18:46:09.728Z] =================================================================================================================== 00:08:08.907 [2024-11-26T18:46:09.728Z] Total : 173318.48 677.03 0.00 0.00 734.28 314.03 2048.00 00:08:09.168 10875.00 IOPS, 42.48 MiB/s 00:08:09.168 Latency(us) 00:08:09.168 [2024-11-26T18:46:09.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.168 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:09.168 Nvme1n1 : 1.01 10955.68 42.80 0.00 0.00 11643.78 4696.75 21408.43 00:08:09.168 [2024-11-26T18:46:09.989Z] 
=================================================================================================================== 00:08:09.168 [2024-11-26T18:46:09.989Z] Total : 10955.68 42.80 0.00 0.00 11643.78 4696.75 21408.43 00:08:09.168 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3472673 00:08:09.168 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3472675 00:08:09.168 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3472678 00:08:09.168 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:09.168 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.168 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:09.168 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.168 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:09.168 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:09.168 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:09.168 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:09.168 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:09.168 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:09.168 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:09.168 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:09.168 rmmod nvme_tcp 00:08:09.168 rmmod nvme_fabrics 00:08:09.168 rmmod nvme_keyring 00:08:09.168 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:09.168 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:09.168 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:09.168 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3472510 ']' 00:08:09.168 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3472510 00:08:09.168 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3472510 ']' 00:08:09.168 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3472510 00:08:09.168 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:09.168 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.168 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3472510 00:08:09.428 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.428 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.428 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 3472510' 00:08:09.428 killing process with pid 3472510 00:08:09.428 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3472510 00:08:09.428 19:46:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3472510 00:08:09.428 19:46:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:09.428 19:46:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:09.428 19:46:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:09.428 19:46:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:09.428 19:46:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:09.428 19:46:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:09.428 19:46:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:09.428 19:46:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:09.428 19:46:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:09.428 19:46:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.428 19:46:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:09.428 19:46:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:11.977 00:08:11.977 real 0m13.149s 00:08:11.977 user 0m19.005s 00:08:11.977 sys 0m7.637s 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:11.977 ************************************ 00:08:11.977 END TEST nvmf_bdev_io_wait 00:08:11.977 ************************************ 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:11.977 ************************************ 00:08:11.977 START TEST nvmf_queue_depth 00:08:11.977 ************************************ 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:11.977 * Looking for test storage... 
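[Editor's note] The asterisk banners and the real/user/sys summary above are produced by the harness's run_test wrapper, which times each suite, prints the START/END markers, and folds the test name into the xtrace prefix (nvmf_tcp.nvmf_target_core.nvmf_queue_depth). A minimal sketch of the pattern, with a simplified banner format; the real helper in common/autotest_common.sh does more bookkeeping:

run_test() {
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"                   # e.g. target/queue_depth.sh --transport=tcp
    echo "************ END TEST $name ************"
}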
00:08:11.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:11.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.977 --rc genhtml_branch_coverage=1 00:08:11.977 --rc genhtml_function_coverage=1 00:08:11.977 --rc genhtml_legend=1 00:08:11.977 --rc geninfo_all_blocks=1 00:08:11.977 --rc geninfo_unexecuted_blocks=1 00:08:11.977 00:08:11.977 ' 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:11.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.977 --rc genhtml_branch_coverage=1 00:08:11.977 --rc genhtml_function_coverage=1 00:08:11.977 --rc genhtml_legend=1 00:08:11.977 --rc geninfo_all_blocks=1 00:08:11.977 --rc geninfo_unexecuted_blocks=1 00:08:11.977 00:08:11.977 ' 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:11.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.977 --rc genhtml_branch_coverage=1 00:08:11.977 --rc genhtml_function_coverage=1 00:08:11.977 --rc genhtml_legend=1 00:08:11.977 --rc geninfo_all_blocks=1 00:08:11.977 --rc geninfo_unexecuted_blocks=1 00:08:11.977 00:08:11.977 ' 00:08:11.977 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:11.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.977 --rc genhtml_branch_coverage=1 00:08:11.977 --rc genhtml_function_coverage=1 00:08:11.978 --rc genhtml_legend=1 00:08:11.978 --rc geninfo_all_blocks=1 00:08:11.978 --rc geninfo_unexecuted_blocks=1 00:08:11.978 00:08:11.978 ' 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:11.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:11.978 19:46:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:20.125 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:20.125 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:20.125 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:20.125 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:20.126 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.126 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:20.126 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:20.126 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.126 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:20.126 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:20.126 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:20.126 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:20.126 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:20.126 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:20.126 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:20.126 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:20.126 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:20.126 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:20.126 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:20.126 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:20.126 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:20.126 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:20.126 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:20.126 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:20.126 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:20.126 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:20.126 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:20.126 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:20.126 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:20.126 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:20.126 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:20.126 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:20.126 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:20.126 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:20.126 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:20.126 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:20.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:20.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:08:20.126 00:08:20.126 --- 10.0.0.2 ping statistics --- 00:08:20.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.126 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:08:20.126 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:20.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:20.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:08:20.126 00:08:20.126 --- 10.0.0.1 ping statistics --- 00:08:20.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.126 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:08:20.126 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:20.126 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:20.126 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:20.126 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:20.126 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:20.126 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:20.126 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:20.126 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:20.126 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:20.126 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:20.126 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:20.126 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:20.126 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:20.126 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3478479 00:08:20.126 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3478479 00:08:20.126 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3478479 ']' 00:08:20.126 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.126 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.126 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.126 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.126 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:20.126 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:20.126 [2024-11-26 19:46:20.217373] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
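[Editor's note] Everything from the interface flushes to the pings above is the harness building its two-port test topology: one E810 port (cvl_0_0) is moved into a private network namespace to play the target, its peer (cvl_0_1) stays in the root namespace as the initiator, and a single ping in each direction proves the 10.0.0.0/24 link before nvmf_tgt is launched inside the namespace. Condensed from the commands traced above (interface names, addresses, and flags as in the log; error handling omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target-side port now lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                               # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &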
00:08:20.126 [2024-11-26 19:46:20.217442] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.126 [2024-11-26 19:46:20.321025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.126 [2024-11-26 19:46:20.371513] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:20.126 [2024-11-26 19:46:20.371563] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:20.126 [2024-11-26 19:46:20.371572] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:20.126 [2024-11-26 19:46:20.371585] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:20.126 [2024-11-26 19:46:20.371591] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:20.126 [2024-11-26 19:46:20.372465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:20.387 [2024-11-26 19:46:21.075786] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:20.387 Malloc0 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.387 19:46:21 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:20.387 [2024-11-26 19:46:21.136986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3478916 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3478916 /var/tmp/bdevperf.sock 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3478916 ']' 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:20.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.387 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:20.387 [2024-11-26 19:46:21.196462] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
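[Editor's note] With the target listening, the suite provisions it entirely over RPC and then parks a bdevperf instance on a second socket; -z keeps bdevperf idle until perform_tests is invoked, and -q 1024 is the deep queue the test is named for. The same sequence by hand, roughly, using scripts/rpc.py (the standalone counterpart of the rpc_cmd helper traced above):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MB ram disk, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

The bdev_nvme_attach_controller call that follows below points bdevperf at this listener over /var/tmp/bdevperf.sock, and bdevperf.py perform_tests kicks off the 10-second verify run whose results appear next.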
00:08:20.387 [2024-11-26 19:46:21.196529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3478916 ]
00:08:20.648 [2024-11-26 19:46:21.262994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:20.648 [2024-11-26 19:46:21.310914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:20.648 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:20.908 NVMe0n1
00:08:20.908 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
19:46:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:08:20.908 Running I/O for 10 seconds...
00:08:23.237 9113.00 IOPS, 35.60 MiB/s
[2024-11-26T18:46:25.001Z] 8794.00 IOPS, 34.35 MiB/s
[2024-11-26T18:46:25.945Z] 9216.00 IOPS, 36.00 MiB/s
[2024-11-26T18:46:26.885Z] 9920.00 IOPS, 38.75 MiB/s
[2024-11-26T18:46:27.827Z] 10643.00 IOPS, 41.57 MiB/s
[2024-11-26T18:46:28.769Z] 11081.83 IOPS, 43.29 MiB/s
[2024-11-26T18:46:29.725Z] 11408.00 IOPS, 44.56 MiB/s
[2024-11-26T18:46:31.107Z] 11699.62 IOPS, 45.70 MiB/s
[2024-11-26T18:46:32.049Z] 11937.22 IOPS, 46.63 MiB/s
[2024-11-26T18:46:32.049Z] 12127.30 IOPS, 47.37 MiB/s
00:08:31.228 Latency(us)
00:08:31.228 [2024-11-26T18:46:32.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:31.228 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:08:31.228 Verification LBA range: start 0x0 length 0x4000
00:08:31.228 NVMe0n1 : 10.06 12159.01 47.50 0.00 0.00 83918.14 17148.59 73400.32
00:08:31.228 [2024-11-26T18:46:32.049Z] ===================================================================================================================
00:08:31.228 [2024-11-26T18:46:32.049Z] Total : 12159.01 47.50 0.00 0.00 83918.14 17148.59 73400.32
00:08:31.228 {
00:08:31.228   "results": [
00:08:31.228     {
00:08:31.228       "job": "NVMe0n1",
00:08:31.228       "core_mask": "0x1",
00:08:31.228       "workload": "verify",
00:08:31.228       "status": "finished",
00:08:31.228       "verify_range": {
00:08:31.228         "start": 0,
00:08:31.228         "length": 16384
00:08:31.228       },
00:08:31.228       "queue_depth": 1024,
00:08:31.228       "io_size": 4096,
00:08:31.228       "runtime": 10.055589,
00:08:31.228       "iops": 12159.00928329509,
00:08:31.228       "mibps": 47.49613001287145,
00:08:31.228       "io_failed": 0,
00:08:31.228       "io_timeout": 0,
00:08:31.228       "avg_latency_us": 83918.1446633842,
00:08:31.228       "min_latency_us": 17148.586666666666,
00:08:31.228       "max_latency_us": 73400.32
00:08:31.228     }
00:08:31.228   ],
00:08:31.228   "core_count": 1
00:08:31.228 }
00:08:31.228 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth
-- target/queue_depth.sh@39 -- # killprocess 3478916 00:08:31.228 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3478916 ']' 00:08:31.228 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3478916 00:08:31.228 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:31.228 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:31.228 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3478916 00:08:31.228 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:31.228 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:31.228 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3478916' 00:08:31.228 killing process with pid 3478916 00:08:31.228 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3478916 00:08:31.228 Received shutdown signal, test time was about 10.000000 seconds 00:08:31.228 00:08:31.228 Latency(us) 00:08:31.228 [2024-11-26T18:46:32.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.228 [2024-11-26T18:46:32.049Z] =================================================================================================================== 00:08:31.228 [2024-11-26T18:46:32.049Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:31.228 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3478916 00:08:31.228 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:31.228 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:31.228 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:31.228 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:31.228 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:31.228 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:31.228 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:31.228 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:31.228 rmmod nvme_tcp 00:08:31.228 rmmod nvme_fabrics 00:08:31.228 rmmod nvme_keyring 00:08:31.228 19:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:31.228 19:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:31.228 19:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:31.228 19:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3478479 ']' 00:08:31.228 19:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3478479 00:08:31.228 19:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3478479 ']' 00:08:31.228 19:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3478479 
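[Editor's note] Teardown runs in the reverse order of setup: bdevperf (pid 3478916) is killed first, then the nvmf_tgt reactor (pid 3478479). The killprocess helper whose xtrace appears above boils down to the following shape; this is a sketch reconstructed from the trace, not the verbatim source, and the real helper also special-cases processes that were started via sudo:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                  # no pid recorded for this role
    kill -0 "$pid" 2>/dev/null || return 0     # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")    # reactor_0 / reactor_1 in this run
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                    # reap the child and collect its status
}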
00:08:31.228 19:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:31.228 19:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:31.228 19:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3478479 00:08:31.489 19:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:31.490 19:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:31.490 19:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3478479' 00:08:31.490 killing process with pid 3478479 00:08:31.490 19:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3478479 00:08:31.490 19:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3478479 00:08:31.490 19:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:31.490 19:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:31.490 19:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:31.490 19:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:31.490 19:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:31.490 19:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:31.490 19:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:31.490 19:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:31.490 19:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:31.490 19:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.490 19:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.490 19:46:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:34.033 00:08:34.033 real 0m21.980s 00:08:34.033 user 0m24.412s 00:08:34.033 sys 0m7.117s 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:34.033 ************************************ 00:08:34.033 END TEST nvmf_queue_depth 00:08:34.033 ************************************ 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:34.033 
************************************ 00:08:34.033 START TEST nvmf_target_multipath 00:08:34.033 ************************************ 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:34.033 * Looking for test storage... 00:08:34.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.033 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:34.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.033 --rc genhtml_branch_coverage=1 00:08:34.033 --rc genhtml_function_coverage=1 00:08:34.033 --rc genhtml_legend=1 00:08:34.034 --rc geninfo_all_blocks=1 00:08:34.034 --rc geninfo_unexecuted_blocks=1 00:08:34.034 00:08:34.034 ' 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:34.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.034 --rc genhtml_branch_coverage=1 00:08:34.034 --rc genhtml_function_coverage=1 00:08:34.034 --rc genhtml_legend=1 00:08:34.034 --rc geninfo_all_blocks=1 00:08:34.034 --rc geninfo_unexecuted_blocks=1 00:08:34.034 00:08:34.034 ' 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:34.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.034 --rc genhtml_branch_coverage=1 00:08:34.034 --rc genhtml_function_coverage=1 00:08:34.034 --rc genhtml_legend=1 00:08:34.034 --rc geninfo_all_blocks=1 00:08:34.034 --rc geninfo_unexecuted_blocks=1 00:08:34.034 00:08:34.034 ' 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:34.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.034 --rc genhtml_branch_coverage=1 00:08:34.034 --rc genhtml_function_coverage=1 00:08:34.034 --rc genhtml_legend=1 00:08:34.034 --rc geninfo_all_blocks=1 00:08:34.034 --rc geninfo_unexecuted_blocks=1 00:08:34.034 00:08:34.034 ' 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:34.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:34.034 19:46:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:42.179 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:42.179 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:42.179 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.179 19:46:41 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:42.179 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:42.179 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:42.179 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:42.179 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:42.179 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:42.179 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:42.179 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:42.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:42.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.542 ms 00:08:42.180 00:08:42.180 --- 10.0.0.2 ping statistics --- 00:08:42.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.180 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:42.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:42.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:08:42.180 00:08:42.180 --- 10.0.0.1 ping statistics --- 00:08:42.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.180 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:42.180 only one NIC for nvmf test 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
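The nvmf_tcp_init trace above moves one of the two E810 ports into a private network namespace so a single host can play both roles: the target answers on 10.0.0.2 inside cvl_0_0_ns_spdk while the initiator stays on 10.0.0.1 in the root namespace. A minimal standalone sketch of that setup, assuming the cvl_0_0/cvl_0_1 interface names seen in the trace already exist:

    # target-side namespace; cvl_0_0 becomes invisible to the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # the initiator keeps cvl_0_1 in the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1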
00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:42.180 rmmod nvme_tcp 00:08:42.180 rmmod nvme_fabrics 00:08:42.180 rmmod nvme_keyring 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.180 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.567 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:43.567 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:43.567 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:43.567 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:43.567 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:43.829 00:08:43.829 real 0m10.043s 00:08:43.829 user 0m2.185s 00:08:43.829 sys 0m5.816s 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:43.829 ************************************ 00:08:43.829 END TEST nvmf_target_multipath 00:08:43.829 ************************************ 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:43.829 ************************************ 00:08:43.829 START TEST nvmf_zcopy 00:08:43.829 ************************************ 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:43.829 * Looking for test storage... 
00:08:43.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:08:43.829 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:44.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.091 --rc genhtml_branch_coverage=1 00:08:44.091 --rc genhtml_function_coverage=1 00:08:44.091 --rc genhtml_legend=1 00:08:44.091 --rc geninfo_all_blocks=1 00:08:44.091 --rc geninfo_unexecuted_blocks=1 00:08:44.091 00:08:44.091 ' 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:44.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.091 --rc genhtml_branch_coverage=1 00:08:44.091 --rc genhtml_function_coverage=1 00:08:44.091 --rc genhtml_legend=1 00:08:44.091 --rc geninfo_all_blocks=1 00:08:44.091 --rc geninfo_unexecuted_blocks=1 00:08:44.091 00:08:44.091 ' 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:44.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.091 --rc genhtml_branch_coverage=1 00:08:44.091 --rc genhtml_function_coverage=1 00:08:44.091 --rc genhtml_legend=1 00:08:44.091 --rc geninfo_all_blocks=1 00:08:44.091 --rc geninfo_unexecuted_blocks=1 00:08:44.091 00:08:44.091 ' 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:44.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.091 --rc genhtml_branch_coverage=1 00:08:44.091 --rc genhtml_function_coverage=1 00:08:44.091 --rc genhtml_legend=1 00:08:44.091 --rc geninfo_all_blocks=1 00:08:44.091 --rc geninfo_unexecuted_blocks=1 00:08:44.091 00:08:44.091 ' 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.091 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:44.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:44.092 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:52.391 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:52.391 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:52.391 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:52.391 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:52.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:52.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:08:52.391 00:08:52.391 --- 10.0.0.2 ping statistics --- 00:08:52.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.391 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:52.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:52.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:08:52.391 00:08:52.391 --- 10.0.0.1 ping statistics --- 00:08:52.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.391 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3493057 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3493057 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3493057 ']' 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.391 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.391 [2024-11-26 19:46:52.425813] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
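nvmfappstart then launches the target inside that namespace with shared-memory id 0, all tracepoint groups enabled (-e 0xFFFF), and core mask 0x2, and waitforlisten blocks on pid 3493057 until the RPC socket answers. A hedged sketch of the equivalent manual launch; the polling loop is illustrative, not the literal waitforlisten implementation:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # poll the default RPC socket until the app is up (illustrative)
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done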
00:08:52.391 [2024-11-26 19:46:52.425879] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.391 [2024-11-26 19:46:52.524222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.391 [2024-11-26 19:46:52.573828] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.391 [2024-11-26 19:46:52.573878] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.391 [2024-11-26 19:46:52.573887] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:52.391 [2024-11-26 19:46:52.573894] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:52.391 [2024-11-26 19:46:52.573901] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:52.392 [2024-11-26 19:46:52.574716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.656 [2024-11-26 19:46:53.291202] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.656 [2024-11-26 19:46:53.315464] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:52.656 malloc0
00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
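The rpc_cmd calls traced above map one-to-one onto plain rpc.py invocations against the target's /var/tmp/spdk.sock. A sketch of the same configuration done by hand, with every flag copied from this run:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sudo $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                                        # TCP transport, flags exactly as traced; --zcopy enables zero-copy
  sudo $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10     # allow any host, serial number, max 10 namespaces
  sudo $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sudo $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  sudo $rpc bdev_malloc_create 32 4096 -b malloc0                                               # 32 MiB ramdisk with 4 KiB blocks
  sudo $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1                       # expose malloc0 as NSID 1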
00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:08:52.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:08:52.656 {
00:08:52.656 "params": {
00:08:52.656 "name": "Nvme$subsystem",
00:08:52.656 "trtype": "$TEST_TRANSPORT",
00:08:52.656 "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:52.656 "adrfam": "ipv4",
00:08:52.656 "trsvcid": "$NVMF_PORT",
00:08:52.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:52.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:52.656 "hdgst": ${hdgst:-false},
00:08:52.656 "ddgst": ${ddgst:-false}
00:08:52.656 },
00:08:52.657 "method": "bdev_nvme_attach_controller"
00:08:52.657 }
00:08:52.657 EOF
00:08:52.657 )")
00:08:52.657 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:08:52.657 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:08:52.657 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:08:52.657 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:08:52.657 "params": {
00:08:52.657 "name": "Nvme1",
00:08:52.657 "trtype": "tcp",
00:08:52.657 "traddr": "10.0.0.2",
00:08:52.657 "adrfam": "ipv4",
00:08:52.657 "trsvcid": "4420",
00:08:52.657 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:08:52.657 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:08:52.657 "hdgst": false,
00:08:52.657 "ddgst": false
00:08:52.657 },
00:08:52.657 "method": "bdev_nvme_attach_controller"
00:08:52.657 }'
00:08:52.657 [2024-11-26 19:46:53.417764] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization...
00:08:52.657 [2024-11-26 19:46:53.417831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3493135 ]
00:08:52.917 [2024-11-26 19:46:53.510385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:52.917 [2024-11-26 19:46:53.563786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:53.179 Running I/O for 10 seconds...
00:08:55.060 6446.00 IOPS, 50.36 MiB/s
[2024-11-26T18:46:57.261Z] 6485.00 IOPS, 50.66 MiB/s
[2024-11-26T18:46:58.200Z] 6563.33 IOPS, 51.28 MiB/s
[2024-11-26T18:46:59.141Z] 7332.00 IOPS, 57.28 MiB/s
[2024-11-26T18:47:00.079Z] 7813.20 IOPS, 61.04 MiB/s
[2024-11-26T18:47:01.018Z] 8135.50 IOPS, 63.56 MiB/s
[2024-11-26T18:47:01.964Z] 8358.71 IOPS, 65.30 MiB/s
[2024-11-26T18:47:02.904Z] 8526.88 IOPS, 66.62 MiB/s
[2024-11-26T18:47:04.324Z] 8659.22 IOPS, 67.65 MiB/s
[2024-11-26T18:47:04.324Z] 8767.70 IOPS, 68.50 MiB/s
00:09:03.503                                                                                    Latency(us)
00:09:03.503 [2024-11-26T18:47:04.324Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:03.503 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:09:03.503 Verification LBA range: start 0x0 length 0x1000
00:09:03.503 Nvme1n1                     :      10.01    8769.76      68.51       0.00       0.00   14549.44    1071.79   28617.39
00:09:03.503 [2024-11-26T18:47:04.324Z] ===================================================================================================================
00:09:03.503 [2024-11-26T18:47:04.324Z] Total                       :               8769.76      68.51       0.00       0.00   14549.44    1071.79   28617.39
00:09:03.503 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3497039
00:09:03.503 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:09:03.504 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:03.504 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:09:03.504 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
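Both bdevperf runs read their bdev configuration from an anonymous file descriptor (/dev/fd/62 and /dev/fd/63) fed by gen_nvmf_target_json. With an ordinary file standing in for the pipe, the two invocations are roughly:

  bp=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  sudo $bp --json /tmp/nvmf_tgt.json -t 10 -q 128 -w verify -o 8192        # 10 s verify workload, queue depth 128, 8 KiB I/Os
  sudo $bp --json /tmp/nvmf_tgt.json -t 5 -q 128 -w randrw -M 50 -o 8192   # 5 s 50/50 random read/write mix

/tmp/nvmf_tgt.json is a stand-in name for illustration; the test never writes the config to disk.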
"Nvme$subsystem", 00:09:03.504 "trtype": "$TEST_TRANSPORT", 00:09:03.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:03.504 "adrfam": "ipv4", 00:09:03.504 "trsvcid": "$NVMF_PORT", 00:09:03.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:03.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:03.504 "hdgst": ${hdgst:-false}, 00:09:03.504 "ddgst": ${ddgst:-false} 00:09:03.504 }, 00:09:03.504 "method": "bdev_nvme_attach_controller" 00:09:03.504 } 00:09:03.504 EOF 00:09:03.504 )") 00:09:03.504 [2024-11-26 19:47:03.994605] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.504 [2024-11-26 19:47:03.994632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.504 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:03.504 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:03.504 19:47:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:03.504 19:47:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:03.504 "params": { 00:09:03.504 "name": "Nvme1", 00:09:03.504 "trtype": "tcp", 00:09:03.504 "traddr": "10.0.0.2", 00:09:03.504 "adrfam": "ipv4", 00:09:03.504 "trsvcid": "4420", 00:09:03.504 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:03.504 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:03.504 "hdgst": false, 00:09:03.504 "ddgst": false 00:09:03.504 }, 00:09:03.504 "method": "bdev_nvme_attach_controller" 00:09:03.504 }' 00:09:03.504 [2024-11-26 19:47:04.006600] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.504 [2024-11-26 19:47:04.006609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.504 [2024-11-26 19:47:04.018627] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.504 [2024-11-26 19:47:04.018636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.504 [2024-11-26 19:47:04.030659] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.504 [2024-11-26 19:47:04.030668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.504 [2024-11-26 19:47:04.039713] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:09:03.504 [2024-11-26 19:47:04.039761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3497039 ] 00:09:03.504 [2024-11-26 19:47:04.042690] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.504 [2024-11-26 19:47:04.042697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.504 [2024-11-26 19:47:04.054720] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.504 [2024-11-26 19:47:04.054728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.504 [2024-11-26 19:47:04.066749] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.504 [2024-11-26 19:47:04.066757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.504 [2024-11-26 19:47:04.078779] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.504 [2024-11-26 19:47:04.078787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.504 [2024-11-26 19:47:04.090809] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.504 [2024-11-26 19:47:04.090817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.504 [2024-11-26 19:47:04.102840] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.504 [2024-11-26 19:47:04.102848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.504 [2024-11-26 19:47:04.114871] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.504 [2024-11-26 19:47:04.114883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.504 [2024-11-26 19:47:04.120435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.504 [2024-11-26 19:47:04.126905] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.504 [2024-11-26 19:47:04.126914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.504 [2024-11-26 19:47:04.138936] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.504 [2024-11-26 19:47:04.138945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.504 [2024-11-26 19:47:04.150366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.504 [2024-11-26 19:47:04.150967] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.504 [2024-11-26 19:47:04.150975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.504 [2024-11-26 19:47:04.163003] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.504 [2024-11-26 19:47:04.163012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.504 [2024-11-26 19:47:04.175036] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.504 [2024-11-26 19:47:04.175049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.504 [2024-11-26 19:47:04.187065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:09:03.504 [2024-11-26 19:47:04.187076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.504 [2024-11-26 19:47:04.199094] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.504 [2024-11-26 19:47:04.199103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.504 [2024-11-26 19:47:04.211123] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.504 [2024-11-26 19:47:04.211131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.504 [2024-11-26 19:47:04.223168] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.504 [2024-11-26 19:47:04.223184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.504 [2024-11-26 19:47:04.235192] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.504 [2024-11-26 19:47:04.235202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.504 [2024-11-26 19:47:04.247219] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.504 [2024-11-26 19:47:04.247228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.504 [2024-11-26 19:47:04.259250] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.504 [2024-11-26 19:47:04.259258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.504 [2024-11-26 19:47:04.271282] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.504 [2024-11-26 19:47:04.271289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.504 [2024-11-26 19:47:04.283318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.504 [2024-11-26 19:47:04.283326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.504 [2024-11-26 19:47:04.295347] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.504 [2024-11-26 19:47:04.295357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.504 [2024-11-26 19:47:04.307377] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.504 [2024-11-26 19:47:04.307386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.504 [2024-11-26 19:47:04.319408] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.504 [2024-11-26 19:47:04.319415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.765 [2024-11-26 19:47:04.331439] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.765 [2024-11-26 19:47:04.331451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.765 [2024-11-26 19:47:04.343472] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.765 [2024-11-26 19:47:04.343483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.765 [2024-11-26 19:47:04.355500] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.765 [2024-11-26 19:47:04.355508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.765 [2024-11-26 
19:47:04.367532] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.765 [2024-11-26 19:47:04.367540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.765 [2024-11-26 19:47:04.379562] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.765 [2024-11-26 19:47:04.379569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.765 [2024-11-26 19:47:04.391595] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.765 [2024-11-26 19:47:04.391604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.765 [2024-11-26 19:47:04.403626] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.765 [2024-11-26 19:47:04.403633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.765 [2024-11-26 19:47:04.415660] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.765 [2024-11-26 19:47:04.415667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.765 [2024-11-26 19:47:04.427694] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.765 [2024-11-26 19:47:04.427701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.765 [2024-11-26 19:47:04.439732] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.765 [2024-11-26 19:47:04.439744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.765 [2024-11-26 19:47:04.451763] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.765 [2024-11-26 19:47:04.451777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.765 Running I/O for 5 seconds... 
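From here to the end of the excerpt, the log is dominated by repeated pairs of "Requested NSID 1 already in use" and "Unable to add namespace". That is the point of this phase of the test: while the 5-second randrw bdevperf run is in flight, the test keeps re-issuing nvmf_subsystem_add_ns for a NSID that already exists, and the nvmf_rpc_ns_paused frames suggest each RPC pauses and resumes the subsystem around the rejected add. A sketch of such a driver loop; the shape and pacing are illustrative, not lifted from target/zcopy.sh:

  # Illustrative only: hammer the pause/resume path while bdevperf (PID $perfpid) runs.
  while kill -0 "$perfpid" 2>/dev/null; do
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true   # rejected: NSID 1 is taken
  done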
00:09:03.765 [2024-11-26 19:47:04.467618] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.765 [2024-11-26 19:47:04.467636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.765 [2024-11-26 19:47:04.480733] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.765 [2024-11-26 19:47:04.480751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.765 [2024-11-26 19:47:04.493494] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.765 [2024-11-26 19:47:04.493511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.765 [2024-11-26 19:47:04.506455] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.765 [2024-11-26 19:47:04.506472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.765 [2024-11-26 19:47:04.519121] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.765 [2024-11-26 19:47:04.519137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.765 [2024-11-26 19:47:04.532306] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.765 [2024-11-26 19:47:04.532322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.765 [2024-11-26 19:47:04.545246] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.765 [2024-11-26 19:47:04.545263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.765 [2024-11-26 19:47:04.558572] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.765 [2024-11-26 19:47:04.558588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.765 [2024-11-26 19:47:04.571428] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.765 [2024-11-26 19:47:04.571443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.026 [2024-11-26 19:47:04.584468] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.026 [2024-11-26 19:47:04.584485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.026 [2024-11-26 19:47:04.597704] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.026 [2024-11-26 19:47:04.597721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.026 [2024-11-26 19:47:04.610742] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.026 [2024-11-26 19:47:04.610758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.026 [2024-11-26 19:47:04.623657] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.026 [2024-11-26 19:47:04.623673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.026 [2024-11-26 19:47:04.637166] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.026 [2024-11-26 19:47:04.637182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.026 [2024-11-26 19:47:04.650470] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.026 
[2024-11-26 19:47:04.650486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.026 [2024-11-26 19:47:04.664042] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.026 [2024-11-26 19:47:04.664057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.026 [2024-11-26 19:47:04.677812] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.026 [2024-11-26 19:47:04.677829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.026 [2024-11-26 19:47:04.691434] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.026 [2024-11-26 19:47:04.691450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.026 [2024-11-26 19:47:04.704582] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.026 [2024-11-26 19:47:04.704598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.026 [2024-11-26 19:47:04.718694] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.026 [2024-11-26 19:47:04.718711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.026 [2024-11-26 19:47:04.731336] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.026 [2024-11-26 19:47:04.731352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.026 [2024-11-26 19:47:04.743879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.026 [2024-11-26 19:47:04.743894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.026 [2024-11-26 19:47:04.757410] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.026 [2024-11-26 19:47:04.757427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.026 [2024-11-26 19:47:04.770531] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.026 [2024-11-26 19:47:04.770547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.026 [2024-11-26 19:47:04.784216] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.026 [2024-11-26 19:47:04.784232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.026 [2024-11-26 19:47:04.797398] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.026 [2024-11-26 19:47:04.797414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.026 [2024-11-26 19:47:04.810911] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.026 [2024-11-26 19:47:04.810927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.026 [2024-11-26 19:47:04.824625] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.026 [2024-11-26 19:47:04.824641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.026 [2024-11-26 19:47:04.838015] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.026 [2024-11-26 19:47:04.838030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.286 [2024-11-26 19:47:04.850766] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.286 [2024-11-26 19:47:04.850784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.286 [2024-11-26 19:47:04.863664] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.286 [2024-11-26 19:47:04.863681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.286 [2024-11-26 19:47:04.876916] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.286 [2024-11-26 19:47:04.876932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.286 [2024-11-26 19:47:04.890458] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.286 [2024-11-26 19:47:04.890474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.286 [2024-11-26 19:47:04.904142] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.286 [2024-11-26 19:47:04.904162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.286 [2024-11-26 19:47:04.917199] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.286 [2024-11-26 19:47:04.917215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.286 [2024-11-26 19:47:04.929926] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.286 [2024-11-26 19:47:04.929941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.286 [2024-11-26 19:47:04.943290] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.286 [2024-11-26 19:47:04.943305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.286 [2024-11-26 19:47:04.957079] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.286 [2024-11-26 19:47:04.957095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.286 [2024-11-26 19:47:04.969960] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.286 [2024-11-26 19:47:04.969977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.286 [2024-11-26 19:47:04.983239] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.286 [2024-11-26 19:47:04.983255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.286 [2024-11-26 19:47:04.997210] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.286 [2024-11-26 19:47:04.997226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.286 [2024-11-26 19:47:05.010779] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.286 [2024-11-26 19:47:05.010796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.286 [2024-11-26 19:47:05.024443] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.286 [2024-11-26 19:47:05.024459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.286 [2024-11-26 19:47:05.037777] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.286 [2024-11-26 19:47:05.037793] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.286 [2024-11-26 19:47:05.051394] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.286 [2024-11-26 19:47:05.051410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.286 [2024-11-26 19:47:05.064293] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.286 [2024-11-26 19:47:05.064309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.286 [2024-11-26 19:47:05.077085] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.286 [2024-11-26 19:47:05.077102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.286 [2024-11-26 19:47:05.090293] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.286 [2024-11-26 19:47:05.090309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.546 [2024-11-26 19:47:05.103691] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.546 [2024-11-26 19:47:05.103708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.546 [2024-11-26 19:47:05.116873] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.546 [2024-11-26 19:47:05.116890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.546 [2024-11-26 19:47:05.130041] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.546 [2024-11-26 19:47:05.130056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.546 [2024-11-26 19:47:05.143962] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.546 [2024-11-26 19:47:05.143978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.546 [2024-11-26 19:47:05.157574] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.546 [2024-11-26 19:47:05.157590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.546 [2024-11-26 19:47:05.170932] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.547 [2024-11-26 19:47:05.170947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.547 [2024-11-26 19:47:05.183994] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.547 [2024-11-26 19:47:05.184011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.547 [2024-11-26 19:47:05.197567] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.547 [2024-11-26 19:47:05.197584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.547 [2024-11-26 19:47:05.211085] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.547 [2024-11-26 19:47:05.211101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.547 [2024-11-26 19:47:05.224936] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.547 [2024-11-26 19:47:05.224953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.547 [2024-11-26 19:47:05.237340] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.547 [2024-11-26 19:47:05.237355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.547 [2024-11-26 19:47:05.251223] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.547 [2024-11-26 19:47:05.251238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.547 [2024-11-26 19:47:05.264118] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.547 [2024-11-26 19:47:05.264133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.547 [2024-11-26 19:47:05.277238] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.547 [2024-11-26 19:47:05.277253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.547 [2024-11-26 19:47:05.290915] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.547 [2024-11-26 19:47:05.290931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.547 [2024-11-26 19:47:05.303887] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.547 [2024-11-26 19:47:05.303902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.547 [2024-11-26 19:47:05.317209] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.547 [2024-11-26 19:47:05.317225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.547 [2024-11-26 19:47:05.330436] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.547 [2024-11-26 19:47:05.330451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.547 [2024-11-26 19:47:05.344287] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.547 [2024-11-26 19:47:05.344303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.547 [2024-11-26 19:47:05.357773] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.547 [2024-11-26 19:47:05.357789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.807 [2024-11-26 19:47:05.371466] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.807 [2024-11-26 19:47:05.371482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.807 [2024-11-26 19:47:05.385085] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.807 [2024-11-26 19:47:05.385101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.807 [2024-11-26 19:47:05.398252] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.807 [2024-11-26 19:47:05.398269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.807 [2024-11-26 19:47:05.411660] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.807 [2024-11-26 19:47:05.411676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.807 [2024-11-26 19:47:05.425210] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.807 [2024-11-26 19:47:05.425225] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.807 [2024-11-26 19:47:05.438106] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.807 [2024-11-26 19:47:05.438124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.807 [2024-11-26 19:47:05.450946] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.807 [2024-11-26 19:47:05.450962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.807 18252.00 IOPS, 142.59 MiB/s [2024-11-26T18:47:05.628Z] [2024-11-26 19:47:05.463884] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.807 [2024-11-26 19:47:05.463901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.807 [2024-11-26 19:47:05.477376] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.807 [2024-11-26 19:47:05.477393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.807 [2024-11-26 19:47:05.490461] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.807 [2024-11-26 19:47:05.490478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.807 [2024-11-26 19:47:05.504689] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.807 [2024-11-26 19:47:05.504704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.808 [2024-11-26 19:47:05.517638] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.808 [2024-11-26 19:47:05.517654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.808 [2024-11-26 19:47:05.530468] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.808 [2024-11-26 19:47:05.530485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.808 [2024-11-26 19:47:05.543838] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.808 [2024-11-26 19:47:05.543855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.808 [2024-11-26 19:47:05.557297] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.808 [2024-11-26 19:47:05.557313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.808 [2024-11-26 19:47:05.570593] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.808 [2024-11-26 19:47:05.570614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.808 [2024-11-26 19:47:05.584446] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.808 [2024-11-26 19:47:05.584463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.808 [2024-11-26 19:47:05.598270] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.808 [2024-11-26 19:47:05.598286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.808 [2024-11-26 19:47:05.611825] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.808 [2024-11-26 19:47:05.611841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.808 [2024-11-26 
19:47:05.624585] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.808 [2024-11-26 19:47:05.624601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.068 [2024-11-26 19:47:05.637477] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.068 [2024-11-26 19:47:05.637493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.068 [2024-11-26 19:47:05.650587] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.068 [2024-11-26 19:47:05.650604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.068 [2024-11-26 19:47:05.663600] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.068 [2024-11-26 19:47:05.663616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.068 [2024-11-26 19:47:05.677281] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.068 [2024-11-26 19:47:05.677297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.068 [2024-11-26 19:47:05.690442] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.068 [2024-11-26 19:47:05.690458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.068 [2024-11-26 19:47:05.703898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.068 [2024-11-26 19:47:05.703914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.068 [2024-11-26 19:47:05.717357] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.068 [2024-11-26 19:47:05.717374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.068 [2024-11-26 19:47:05.731097] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.068 [2024-11-26 19:47:05.731113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.068 [2024-11-26 19:47:05.744893] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.068 [2024-11-26 19:47:05.744909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.068 [2024-11-26 19:47:05.757919] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.068 [2024-11-26 19:47:05.757935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.069 [2024-11-26 19:47:05.771872] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.069 [2024-11-26 19:47:05.771887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.069 [2024-11-26 19:47:05.784935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.069 [2024-11-26 19:47:05.784951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.069 [2024-11-26 19:47:05.798216] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.069 [2024-11-26 19:47:05.798232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.069 [2024-11-26 19:47:05.811861] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.069 [2024-11-26 19:47:05.811876] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.069 [2024-11-26 19:47:05.825616] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.069 [2024-11-26 19:47:05.825637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.069 [2024-11-26 19:47:05.838327] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.069 [2024-11-26 19:47:05.838343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.069 [2024-11-26 19:47:05.851277] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.069 [2024-11-26 19:47:05.851293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.069 [2024-11-26 19:47:05.864581] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.069 [2024-11-26 19:47:05.864597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.069 [2024-11-26 19:47:05.877447] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.069 [2024-11-26 19:47:05.877462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.329 [2024-11-26 19:47:05.890828] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.329 [2024-11-26 19:47:05.890845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.329 [2024-11-26 19:47:05.904585] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.329 [2024-11-26 19:47:05.904601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.329 [2024-11-26 19:47:05.917490] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.329 [2024-11-26 19:47:05.917506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.329 [2024-11-26 19:47:05.930461] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.329 [2024-11-26 19:47:05.930477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.329 [2024-11-26 19:47:05.943457] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.329 [2024-11-26 19:47:05.943474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.329 [2024-11-26 19:47:05.956504] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.329 [2024-11-26 19:47:05.956520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.329 [2024-11-26 19:47:05.969381] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.329 [2024-11-26 19:47:05.969397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.329 [2024-11-26 19:47:05.981963] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.329 [2024-11-26 19:47:05.981979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.329 [2024-11-26 19:47:05.995166] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.329 [2024-11-26 19:47:05.995183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.329 [2024-11-26 19:47:06.008884] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.329 [2024-11-26 19:47:06.008900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.329 [2024-11-26 19:47:06.022264] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.329 [2024-11-26 19:47:06.022280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.329 [2024-11-26 19:47:06.035948] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.329 [2024-11-26 19:47:06.035964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.329 [2024-11-26 19:47:06.049366] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.329 [2024-11-26 19:47:06.049382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.329 [2024-11-26 19:47:06.063052] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.329 [2024-11-26 19:47:06.063068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.329 [2024-11-26 19:47:06.076277] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.329 [2024-11-26 19:47:06.076296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.329 [2024-11-26 19:47:06.089914] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.329 [2024-11-26 19:47:06.089930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.329 [2024-11-26 19:47:06.102984] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.329 [2024-11-26 19:47:06.103000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.329 [2024-11-26 19:47:06.116027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.330 [2024-11-26 19:47:06.116043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.330 [2024-11-26 19:47:06.128890] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.330 [2024-11-26 19:47:06.128907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.330 [2024-11-26 19:47:06.141887] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.330 [2024-11-26 19:47:06.141903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.591 [2024-11-26 19:47:06.154929] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.591 [2024-11-26 19:47:06.154945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.591 [2024-11-26 19:47:06.168243] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.591 [2024-11-26 19:47:06.168259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.591 [2024-11-26 19:47:06.182177] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.591 [2024-11-26 19:47:06.182193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.591 [2024-11-26 19:47:06.194751] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.591 [2024-11-26 19:47:06.194767] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:05.591 [2024-11-26 19:47:06.207754] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:05.591 [2024-11-26 19:47:06.207770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats at roughly 13 ms intervals, about 250 times, from 19:47:06.220866 through 19:47:09.465487; only the periodic throughput samples emitted by the concurrent I/O job during that span are kept below ...]
00:09:05.852 18346.50 IOPS, 143.33 MiB/s [2024-11-26T18:47:06.673Z]
00:09:06.903 18346.67 IOPS, 143.33 MiB/s [2024-11-26T18:47:07.724Z]
00:09:07.688 18367.75 IOPS, 143.50 MiB/s [2024-11-26T18:47:08.509Z]
00:09:08.771 18384.80 IOPS, 143.63 MiB/s
00:09:08.771 Latency(us)
00:09:08.771 [2024-11-26T18:47:09.592Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:09:08.771 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:08.771 Nvme1n1            :       5.01   18385.85     143.64      0.00     0.00    6955.58    2976.43   18240.85
00:09:08.771 [2024-11-26T18:47:09.592Z] ===================================================================================================================
00:09:08.771 [2024-11-26T18:47:09.592Z] Total              :              18385.85     143.64      0.00     0.00    6955.58    2976.43   18240.85
00:09:08.771 [2024-11-26 19:47:09.475272] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.771 [2024-11-26 19:47:09.475286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair recurs at roughly 12 ms intervals through 19:47:09.571519 while the test winds the I/O job down ...]
00:09:08.772 [2024-11-26 19:47:09.571511] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.772 [2024-11-26 19:47:09.571519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
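Note: the error flood above is the expected negative path of this zcopy test, not a failure of the run. While the I/O job drives NSID 1, the test keeps re-issuing nvmf_subsystem_add_ns for that same NSID over RPC, and the target correctly rejects every attempt. A minimal shell sketch of such a loop is shown below; the subsystem NQN, NSID, and bdev name malloc0 are taken from this log, while the rpc.py path and the iteration count are assumptions.

    #!/usr/bin/env bash
    # Sketch only: re-add a namespace whose NSID is already taken, against a
    # running SPDK nvmf target. rpc.py location below is an assumed path.
    rpc=./spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1            # subsystem NQN from this log

    for _ in $(seq 1 100); do                 # iteration count is arbitrary
        # NSID 1 is already backed by malloc0, so each call should fail with
        # "Requested NSID 1 already in use" / "Unable to add namespace".
        $rpc nvmf_subsystem_add_ns "$nqn" malloc0 -n 1 || true
    done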
00:09:08.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3497039) - No such process
00:09:08.772 19:47:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3497039
00:09:08.772 19:47:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:08.772 19:47:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:08.772 19:47:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:09.033 19:47:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:09.034 19:47:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:09.034 19:47:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.034 19:47:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:09.034 delay0
00:09:09.034 19:47:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:09.034 19:47:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:09.034 19:47:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.034 19:47:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:09.034 19:47:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:09.034 19:47:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:09:09.034 [2024-11-26 19:47:09.783326] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:17.173 Initializing NVMe Controllers
00:09:17.173 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:17.173 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:17.173 Initialization complete. Launching workers.
00:09:17.173 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 237, failed: 32633 00:09:17.173 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 32731, failed to submit 139 00:09:17.173 success 32660, unsuccessful 71, failed 0 00:09:17.173 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:17.173 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:17.173 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:17.173 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:17.173 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:17.173 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:17.173 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:17.173 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:17.173 rmmod nvme_tcp 00:09:17.173 rmmod nvme_fabrics 00:09:17.173 rmmod nvme_keyring 00:09:17.173 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:17.173 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:17.173 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:17.173 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3493057 ']' 00:09:17.173 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3493057 00:09:17.173 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3493057 ']' 00:09:17.173 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3493057 00:09:17.173 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:17.173 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.173 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3493057 00:09:17.173 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:17.173 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:17.173 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3493057' 00:09:17.173 killing process with pid 3493057 00:09:17.173 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3493057 00:09:17.173 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3493057 00:09:17.173 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:17.173 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:17.173 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:17.173 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:17.173 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:17.173 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:17.173 19:47:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:17.173 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:17.173 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:17.173 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.173 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.173 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.561 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:18.561 00:09:18.561 real 0m34.683s 00:09:18.561 user 0m45.726s 00:09:18.561 sys 0m11.879s 00:09:18.561 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.561 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:18.561 ************************************ 00:09:18.561 END TEST nvmf_zcopy 00:09:18.561 ************************************ 00:09:18.561 19:47:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:18.561 19:47:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:18.561 19:47:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.561 19:47:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:18.561 ************************************ 00:09:18.561 START TEST nvmf_nmic 00:09:18.561 ************************************ 00:09:18.561 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:18.561 * Looking for test storage... 
00:09:18.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:18.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.825 --rc genhtml_branch_coverage=1 00:09:18.825 --rc genhtml_function_coverage=1 00:09:18.825 --rc genhtml_legend=1 00:09:18.825 --rc geninfo_all_blocks=1 00:09:18.825 --rc geninfo_unexecuted_blocks=1 00:09:18.825 00:09:18.825 ' 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:18.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.825 --rc genhtml_branch_coverage=1 00:09:18.825 --rc genhtml_function_coverage=1 00:09:18.825 --rc genhtml_legend=1 00:09:18.825 --rc geninfo_all_blocks=1 00:09:18.825 --rc geninfo_unexecuted_blocks=1 00:09:18.825 00:09:18.825 ' 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:18.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.825 --rc genhtml_branch_coverage=1 00:09:18.825 --rc genhtml_function_coverage=1 00:09:18.825 --rc genhtml_legend=1 00:09:18.825 --rc geninfo_all_blocks=1 00:09:18.825 --rc geninfo_unexecuted_blocks=1 00:09:18.825 00:09:18.825 ' 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:18.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.825 --rc genhtml_branch_coverage=1 00:09:18.825 --rc genhtml_function_coverage=1 00:09:18.825 --rc genhtml_legend=1 00:09:18.825 --rc geninfo_all_blocks=1 00:09:18.825 --rc geninfo_unexecuted_blocks=1 00:09:18.825 00:09:18.825 ' 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
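The scripts/common.sh walk traced above (lt delegating to cmp_versions, splitting ver1/ver2 on IFS=.-: and comparing element by element) is how the harness decides that the installed lcov (1.x here) predates 2.0 and therefore which --rc coverage spellings to export. A stripped-down sketch of that comparison, assuming purely numeric components (the real helper also normalizes each part through its decimal function):

    # Sketch of the traced component-wise "less than" version test.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_* spelling"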
00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.825 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.826 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.826 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.826 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:18.826 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.826 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:18.826 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:18.826 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:18.826 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:18.826 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.826 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.826 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:18.826 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:18.826 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:18.826 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:18.826 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:18.826 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:18.826 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:18.826 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:18.826 
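The "common.sh: line 33: [: : integer expression expected" captured above is benign fallout from the traced '[' '' -eq 1 ']': an unset option variable reaches a numeric test inside build_nvmf_app_args, the test simply fails, and setup continues. A defensive sketch that would keep the trace clean by defaulting before comparing (SPDK_TEST_FLAG is a placeholder name, not the variable common.sh actually tests):

    # Sketch: default empty/unset to 0 instead of feeding '' to -eq.
    if [ "${SPDK_TEST_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(-e 0xFFFF)   # illustrative; same option appended elsewhere in this trace
    fi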
19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:18.826 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:18.826 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:18.826 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:18.826 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:18.826 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.826 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:18.826 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.826 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:18.826 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:18.826 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:18.826 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:26.973 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:26.973 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:26.973 19:47:26 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:26.973 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:26.973 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:26.974 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:26.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:26.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:09:26.974 00:09:26.974 --- 10.0.0.2 ping statistics --- 00:09:26.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.974 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:09:26.974 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:26.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:26.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:09:26.974 00:09:26.974 --- 10.0.0.1 ping statistics --- 00:09:26.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.974 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:09:26.974 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.974 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:26.974 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:26.974 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.974 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:26.974 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:26.974 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.974 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:26.974 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:26.974 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:26.974 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:26.974 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:26.974 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.974 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3505689 00:09:26.974 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3505689 00:09:26.974 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:26.974 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3505689 ']' 00:09:26.974 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.974 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.974 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.974 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.974 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.974 [2024-11-26 19:47:27.118561] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
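Taken together, the plumbing traced above is the whole single-host "phy" topology for this run: the first e810 port (cvl_0_0) is moved into a private network namespace and addressed as the target side, the peer port (cvl_0_1) stays in the root namespace as the initiator, TCP/4420 is opened with an iptables ACCEPT rule, and the two pings verify both directions; nvmf_tgt is then launched inside the namespace (the "Starting SPDK v25.01-pre" line above), and its EAL output continues below. Condensed, with the interface names and addresses exactly as in this log (the SPDK_NVMF comment tag on the iptables rule is omitted):

    # Same steps as the trace above, collected in one place.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port -> namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator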
00:09:26.974 [2024-11-26 19:47:27.118627] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.974 [2024-11-26 19:47:27.217370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:26.974 [2024-11-26 19:47:27.274435] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:26.974 [2024-11-26 19:47:27.274492] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:26.974 [2024-11-26 19:47:27.274501] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:26.974 [2024-11-26 19:47:27.274508] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:26.974 [2024-11-26 19:47:27.274515] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:26.974 [2024-11-26 19:47:27.276573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.974 [2024-11-26 19:47:27.276738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:26.974 [2024-11-26 19:47:27.276906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.974 [2024-11-26 19:47:27.276906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:27.236 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.236 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:27.236 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:27.236 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:27.236 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.236 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:27.236 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:27.236 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.236 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.236 [2024-11-26 19:47:27.994998] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:27.236 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.236 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:27.236 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.236 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.236 Malloc0 00:09:27.236 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.236 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:27.236 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.236 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:27.236 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.236 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:27.236 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.236 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.554 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.554 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:27.554 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.554 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.555 [2024-11-26 19:47:28.071047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:27.555 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.555 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:27.555 test case1: single bdev can't be used in multiple subsystems 00:09:27.555 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:27.555 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.555 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.555 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.555 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:27.555 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.555 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.555 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.555 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:27.555 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:27.555 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.555 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.555 [2024-11-26 19:47:28.106800] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:27.555 [2024-11-26 19:47:28.106829] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:27.555 [2024-11-26 19:47:28.106840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.555 request: 00:09:27.555 { 00:09:27.555 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:27.555 "namespace": { 00:09:27.555 "bdev_name": "Malloc0", 00:09:27.555 "no_auto_visible": false, 
00:09:27.555 "hide_metadata": false 00:09:27.555 }, 00:09:27.555 "method": "nvmf_subsystem_add_ns", 00:09:27.555 "req_id": 1 00:09:27.555 } 00:09:27.555 Got JSON-RPC error response 00:09:27.555 response: 00:09:27.555 { 00:09:27.555 "code": -32602, 00:09:27.555 "message": "Invalid parameters" 00:09:27.555 } 00:09:27.555 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:27.555 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:27.555 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:27.555 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:27.555 Adding namespace failed - expected result. 00:09:27.555 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:27.555 test case2: host connect to nvmf target in multiple paths 00:09:27.555 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:27.555 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.555 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.555 [2024-11-26 19:47:28.119045] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:27.555 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.555 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:28.985 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:30.900 19:47:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:30.900 19:47:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:30.900 19:47:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:30.900 19:47:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:30.900 19:47:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:32.817 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:32.817 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:32.817 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:32.817 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:32.817 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:32.817 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:32.817 19:47:33 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:32.817 [global] 00:09:32.817 thread=1 00:09:32.817 invalidate=1 00:09:32.817 rw=write 00:09:32.817 time_based=1 00:09:32.817 runtime=1 00:09:32.817 ioengine=libaio 00:09:32.817 direct=1 00:09:32.817 bs=4096 00:09:32.817 iodepth=1 00:09:32.817 norandommap=0 00:09:32.817 numjobs=1 00:09:32.817 00:09:32.817 verify_dump=1 00:09:32.817 verify_backlog=512 00:09:32.817 verify_state_save=0 00:09:32.817 do_verify=1 00:09:32.817 verify=crc32c-intel 00:09:32.817 [job0] 00:09:32.817 filename=/dev/nvme0n1 00:09:32.817 Could not set queue depth (nvme0n1) 00:09:32.817 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.817 fio-3.35 00:09:32.817 Starting 1 thread 00:09:34.200 00:09:34.200 job0: (groupid=0, jobs=1): err= 0: pid=3507431: Tue Nov 26 19:47:34 2024 00:09:34.200 read: IOPS=16, BW=66.0KiB/s (67.6kB/s)(68.0KiB/1030msec) 00:09:34.200 slat (nsec): min=26717, max=27755, avg=27021.82, stdev=232.90 00:09:34.200 clat (usec): min=40963, max=42011, avg=41788.97, stdev=366.94 00:09:34.200 lat (usec): min=40991, max=42038, avg=41815.99, stdev=366.80 00:09:34.200 clat percentiles (usec): 00:09:34.200 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:09:34.200 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:09:34.200 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:34.200 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:34.200 | 99.99th=[42206] 00:09:34.200 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:09:34.200 slat (usec): min=9, max=26325, avg=76.70, stdev=1162.40 00:09:34.200 clat (usec): min=238, max=816, avg=540.05, stdev=99.79 00:09:34.200 lat (usec): min=248, max=26954, avg=616.75, stdev=1170.97 00:09:34.200 clat percentiles (usec): 00:09:34.200 | 1.00th=[ 330], 5.00th=[ 392], 10.00th=[ 412], 20.00th=[ 453], 00:09:34.200 | 30.00th=[ 490], 40.00th=[ 515], 50.00th=[ 529], 60.00th=[ 562], 00:09:34.200 | 70.00th=[ 586], 80.00th=[ 619], 90.00th=[ 676], 95.00th=[ 725], 00:09:34.200 | 99.00th=[ 766], 99.50th=[ 791], 99.90th=[ 816], 99.95th=[ 816], 00:09:34.200 | 99.99th=[ 816] 00:09:34.200 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:34.200 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:34.200 lat (usec) : 250=0.38%, 500=34.03%, 750=60.87%, 1000=1.51% 00:09:34.200 lat (msec) : 50=3.21% 00:09:34.200 cpu : usr=0.39%, sys=2.04%, ctx=532, majf=0, minf=1 00:09:34.200 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.200 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.200 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.200 00:09:34.200 Run status group 0 (all jobs): 00:09:34.200 READ: bw=66.0KiB/s (67.6kB/s), 66.0KiB/s-66.0KiB/s (67.6kB/s-67.6kB/s), io=68.0KiB (69.6kB), run=1030-1030msec 00:09:34.200 WRITE: bw=1988KiB/s (2036kB/s), 1988KiB/s-1988KiB/s (2036kB/s-2036kB/s), io=2048KiB (2097kB), run=1030-1030msec 00:09:34.200 00:09:34.200 Disk stats (read/write): 00:09:34.200 nvme0n1: ios=38/512, merge=0/0, ticks=1508/243, in_queue=1751, 
util=98.90% 00:09:34.200 19:47:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:34.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:34.200 19:47:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:34.200 19:47:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:34.200 19:47:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:34.200 19:47:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:34.200 19:47:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:34.200 19:47:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:34.200 19:47:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:34.200 19:47:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:34.200 19:47:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:34.200 19:47:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:34.200 19:47:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:34.200 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:34.200 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:34.200 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:34.200 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:34.200 rmmod nvme_tcp 00:09:34.461 rmmod nvme_fabrics 00:09:34.461 rmmod nvme_keyring 00:09:34.461 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:34.461 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:34.461 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:34.461 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3505689 ']' 00:09:34.461 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3505689 00:09:34.461 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3505689 ']' 00:09:34.461 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3505689 00:09:34.461 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:34.461 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:34.461 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3505689 00:09:34.461 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:34.461 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:34.461 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3505689' 00:09:34.461 killing process with pid 3505689 00:09:34.461 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # 
kill 3505689 00:09:34.461 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3505689 00:09:34.461 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:34.461 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:34.461 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:34.461 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:34.461 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:34.461 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:34.461 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:34.721 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:34.721 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:34.721 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.721 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.722 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.633 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:36.633 00:09:36.633 real 0m18.077s 00:09:36.633 user 0m49.896s 00:09:36.633 sys 0m6.625s 00:09:36.633 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.633 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.633 ************************************ 00:09:36.633 END TEST nvmf_nmic 00:09:36.633 ************************************ 00:09:36.633 19:47:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:36.633 19:47:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:36.633 19:47:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.633 19:47:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:36.633 ************************************ 00:09:36.633 START TEST nvmf_fio_target 00:09:36.633 ************************************ 00:09:36.633 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:36.894 * Looking for test storage... 
00:09:36.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:36.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.894 --rc genhtml_branch_coverage=1 00:09:36.894 --rc genhtml_function_coverage=1 00:09:36.894 --rc genhtml_legend=1 00:09:36.894 --rc geninfo_all_blocks=1 00:09:36.894 --rc geninfo_unexecuted_blocks=1 00:09:36.894 00:09:36.894 ' 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:36.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.894 --rc genhtml_branch_coverage=1 00:09:36.894 --rc genhtml_function_coverage=1 00:09:36.894 --rc genhtml_legend=1 00:09:36.894 --rc geninfo_all_blocks=1 00:09:36.894 --rc geninfo_unexecuted_blocks=1 00:09:36.894 00:09:36.894 ' 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:36.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.894 --rc genhtml_branch_coverage=1 00:09:36.894 --rc genhtml_function_coverage=1 00:09:36.894 --rc genhtml_legend=1 00:09:36.894 --rc geninfo_all_blocks=1 00:09:36.894 --rc geninfo_unexecuted_blocks=1 00:09:36.894 00:09:36.894 ' 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:36.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.894 --rc genhtml_branch_coverage=1 00:09:36.894 --rc genhtml_function_coverage=1 00:09:36.894 --rc genhtml_legend=1 00:09:36.894 --rc geninfo_all_blocks=1 00:09:36.894 --rc geninfo_unexecuted_blocks=1 00:09:36.894 00:09:36.894 ' 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.894 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:36.895 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.895 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:36.895 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:36.895 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:36.895 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.895 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.895 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.895 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:36.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:36.895 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:36.895 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:36.895 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:36.895 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:36.895 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:36.895 19:47:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:36.895 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:36.895 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:36.895 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.895 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:36.895 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:36.895 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:36.895 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.895 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.895 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.895 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:36.895 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:36.895 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:36.895 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:45.028 19:47:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:45.028 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:45.028 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.028 19:47:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:45.028 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:45.028 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:45.028 19:47:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:45.028 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:45.029 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:45.029 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:45.029 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:45.029 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:45.029 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:45.029 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.029 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:45.029 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:45.029 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:45.029 19:47:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:45.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:45.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:09:45.029 00:09:45.029 --- 10.0.0.2 ping statistics --- 00:09:45.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.029 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:45.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:45.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:09:45.029 00:09:45.029 --- 10.0.0.1 ping statistics --- 00:09:45.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.029 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3513204 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3513204 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3513204 ']' 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.029 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.029 [2024-11-26 19:47:45.280258] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:09:45.029 [2024-11-26 19:47:45.280328] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.029 [2024-11-26 19:47:45.379723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:45.029 [2024-11-26 19:47:45.432516] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.029 [2024-11-26 19:47:45.432575] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.029 [2024-11-26 19:47:45.432584] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.029 [2024-11-26 19:47:45.432592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.029 [2024-11-26 19:47:45.432599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:45.029 [2024-11-26 19:47:45.434693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.029 [2024-11-26 19:47:45.434855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:45.029 [2024-11-26 19:47:45.435017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.029 [2024-11-26 19:47:45.435018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:45.601 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.601 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:45.601 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:45.601 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:45.601 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.601 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.601 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:45.601 [2024-11-26 19:47:46.325346] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:45.601 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:45.862 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:45.862 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.123 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:46.123 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.383 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:46.383 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.644 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:46.644 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:46.644 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.905 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:46.905 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:47.165 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:47.165 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:47.425 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:47.425 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:47.425 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:47.686 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:47.686 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:47.947 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:47.947 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:48.209 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:48.209 [2024-11-26 19:47:48.923712] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:48.209 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:48.469 19:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:48.730 19:47:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:50.115 19:47:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:50.115 19:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:50.115 19:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:50.115 19:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:50.115 19:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:50.115 19:47:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:52.682 19:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:52.682 19:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:52.682 19:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:52.682 19:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:52.682 19:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:52.682 19:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:52.682 19:47:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:52.682 [global] 00:09:52.682 thread=1 00:09:52.682 invalidate=1 00:09:52.682 rw=write 00:09:52.682 time_based=1 00:09:52.682 runtime=1 00:09:52.682 ioengine=libaio 00:09:52.682 direct=1 00:09:52.682 bs=4096 00:09:52.682 iodepth=1 00:09:52.682 norandommap=0 00:09:52.682 numjobs=1 00:09:52.682 00:09:52.682 verify_dump=1 00:09:52.682 verify_backlog=512 00:09:52.682 verify_state_save=0 00:09:52.682 do_verify=1 00:09:52.682 verify=crc32c-intel 00:09:52.682 [job0] 00:09:52.682 filename=/dev/nvme0n1 00:09:52.682 [job1] 00:09:52.682 filename=/dev/nvme0n2 00:09:52.682 [job2] 00:09:52.682 filename=/dev/nvme0n3 00:09:52.682 [job3] 00:09:52.682 filename=/dev/nvme0n4 00:09:52.682 Could not set queue depth (nvme0n1) 00:09:52.682 Could not set queue depth (nvme0n2) 00:09:52.682 Could not set queue depth (nvme0n3) 00:09:52.682 Could not set queue depth (nvme0n4) 00:09:52.682 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:52.682 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:52.682 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:52.682 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:52.682 fio-3.35 00:09:52.682 Starting 4 threads 00:09:54.065 00:09:54.065 job0: (groupid=0, jobs=1): err= 0: pid=3515312: Tue Nov 26 19:47:54 2024 00:09:54.065 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:54.065 slat (nsec): min=7288, max=61217, avg=25777.87, stdev=3058.70 00:09:54.065 clat (usec): min=481, max=1342, avg=975.90, stdev=97.29 00:09:54.065 lat (usec): min=508, max=1368, avg=1001.68, stdev=97.61 00:09:54.065 clat percentiles (usec): 00:09:54.065 | 1.00th=[ 676], 5.00th=[ 783], 10.00th=[ 832], 20.00th=[ 922], 
00:09:54.065 | 30.00th=[ 947], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1012], 00:09:54.065 | 70.00th=[ 1029], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1106], 00:09:54.065 | 99.00th=[ 1139], 99.50th=[ 1139], 99.90th=[ 1336], 99.95th=[ 1336], 00:09:54.065 | 99.99th=[ 1336] 00:09:54.065 write: IOPS=777, BW=3109KiB/s (3184kB/s)(3112KiB/1001msec); 0 zone resets 00:09:54.065 slat (nsec): min=9994, max=65734, avg=30260.87, stdev=9924.32 00:09:54.065 clat (usec): min=196, max=970, avg=583.75, stdev=130.56 00:09:54.065 lat (usec): min=206, max=1004, avg=614.01, stdev=133.66 00:09:54.065 clat percentiles (usec): 00:09:54.065 | 1.00th=[ 265], 5.00th=[ 367], 10.00th=[ 396], 20.00th=[ 465], 00:09:54.065 | 30.00th=[ 519], 40.00th=[ 562], 50.00th=[ 594], 60.00th=[ 627], 00:09:54.065 | 70.00th=[ 660], 80.00th=[ 693], 90.00th=[ 742], 95.00th=[ 783], 00:09:54.065 | 99.00th=[ 873], 99.50th=[ 914], 99.90th=[ 971], 99.95th=[ 971], 00:09:54.065 | 99.99th=[ 971] 00:09:54.065 bw ( KiB/s): min= 4096, max= 4096, per=37.58%, avg=4096.00, stdev= 0.00, samples=1 00:09:54.065 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:54.065 lat (usec) : 250=0.16%, 500=16.28%, 750=39.22%, 1000=25.35% 00:09:54.065 lat (msec) : 2=18.99% 00:09:54.065 cpu : usr=1.70%, sys=4.00%, ctx=1294, majf=0, minf=1 00:09:54.065 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.065 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.065 issued rwts: total=512,778,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.065 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.065 job1: (groupid=0, jobs=1): err= 0: pid=3515313: Tue Nov 26 19:47:54 2024 00:09:54.065 read: IOPS=500, BW=2002KiB/s (2050kB/s)(2064KiB/1031msec) 00:09:54.065 slat (nsec): min=7286, max=46028, avg=25784.09, stdev=4768.95 00:09:54.065 clat (usec): min=423, max=41415, avg=858.31, stdev=1795.64 00:09:54.065 lat (usec): min=449, max=41442, avg=884.09, stdev=1795.72 00:09:54.065 clat percentiles (usec): 00:09:54.065 | 1.00th=[ 486], 5.00th=[ 537], 10.00th=[ 553], 20.00th=[ 603], 00:09:54.065 | 30.00th=[ 685], 40.00th=[ 742], 50.00th=[ 799], 60.00th=[ 857], 00:09:54.065 | 70.00th=[ 906], 80.00th=[ 930], 90.00th=[ 963], 95.00th=[ 996], 00:09:54.065 | 99.00th=[ 1037], 99.50th=[ 1074], 99.90th=[41157], 99.95th=[41157], 00:09:54.065 | 99.99th=[41157] 00:09:54.065 write: IOPS=993, BW=3973KiB/s (4068kB/s)(4096KiB/1031msec); 0 zone resets 00:09:54.065 slat (nsec): min=9804, max=81344, avg=29854.22, stdev=11075.57 00:09:54.065 clat (usec): min=114, max=958, avg=519.63, stdev=152.03 00:09:54.065 lat (usec): min=125, max=970, avg=549.49, stdev=157.59 00:09:54.065 clat percentiles (usec): 00:09:54.065 | 1.00th=[ 204], 5.00th=[ 265], 10.00th=[ 302], 20.00th=[ 383], 00:09:54.065 | 30.00th=[ 437], 40.00th=[ 486], 50.00th=[ 519], 60.00th=[ 562], 00:09:54.065 | 70.00th=[ 603], 80.00th=[ 652], 90.00th=[ 717], 95.00th=[ 775], 00:09:54.065 | 99.00th=[ 865], 99.50th=[ 881], 99.90th=[ 889], 99.95th=[ 963], 00:09:54.065 | 99.99th=[ 963] 00:09:54.065 bw ( KiB/s): min= 4096, max= 4096, per=37.58%, avg=4096.00, stdev= 0.00, samples=2 00:09:54.065 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:09:54.065 lat (usec) : 250=1.69%, 500=28.38%, 750=46.30%, 1000=22.47% 00:09:54.066 lat (msec) : 2=1.10%, 50=0.06% 00:09:54.066 cpu : usr=2.43%, sys=3.98%, ctx=1542, majf=0, minf=1 00:09:54.066 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.066 issued rwts: total=516,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.066 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.066 job2: (groupid=0, jobs=1): err= 0: pid=3515315: Tue Nov 26 19:47:54 2024 00:09:54.066 read: IOPS=17, BW=69.7KiB/s (71.4kB/s)(72.0KiB/1033msec) 00:09:54.066 slat (nsec): min=25385, max=26222, avg=25635.22, stdev=232.82 00:09:54.066 clat (usec): min=1002, max=42056, avg=39280.18, stdev=9562.49 00:09:54.066 lat (usec): min=1027, max=42081, avg=39305.82, stdev=9562.51 00:09:54.066 clat percentiles (usec): 00:09:54.066 | 1.00th=[ 1004], 5.00th=[ 1004], 10.00th=[40633], 20.00th=[41157], 00:09:54.066 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:09:54.066 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:54.066 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:54.066 | 99.99th=[42206] 00:09:54.066 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:09:54.066 slat (nsec): min=9843, max=53814, avg=30105.02, stdev=8899.94 00:09:54.066 clat (usec): min=201, max=977, avg=599.32, stdev=145.21 00:09:54.066 lat (usec): min=218, max=996, avg=629.43, stdev=148.31 00:09:54.066 clat percentiles (usec): 00:09:54.066 | 1.00th=[ 273], 5.00th=[ 355], 10.00th=[ 408], 20.00th=[ 469], 00:09:54.066 | 30.00th=[ 515], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 644], 00:09:54.066 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 775], 95.00th=[ 857], 00:09:54.066 | 99.00th=[ 930], 99.50th=[ 947], 99.90th=[ 979], 99.95th=[ 979], 00:09:54.066 | 99.99th=[ 979] 00:09:54.066 bw ( KiB/s): min= 4096, max= 4096, per=37.58%, avg=4096.00, stdev= 0.00, samples=1 00:09:54.066 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:54.066 lat (usec) : 250=0.94%, 500=23.40%, 750=60.19%, 1000=12.08% 00:09:54.066 lat (msec) : 2=0.19%, 50=3.21% 00:09:54.066 cpu : usr=0.68%, sys=1.45%, ctx=531, majf=0, minf=2 00:09:54.066 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.066 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.066 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.066 job3: (groupid=0, jobs=1): err= 0: pid=3515316: Tue Nov 26 19:47:54 2024 00:09:54.066 read: IOPS=98, BW=393KiB/s (403kB/s)(408KiB/1037msec) 00:09:54.066 slat (nsec): min=7733, max=48253, avg=27187.26, stdev=3992.39 00:09:54.066 clat (usec): min=460, max=42087, avg=7115.56, stdev=14646.91 00:09:54.066 lat (usec): min=488, max=42115, avg=7142.75, stdev=14646.84 00:09:54.066 clat percentiles (usec): 00:09:54.066 | 1.00th=[ 515], 5.00th=[ 594], 10.00th=[ 668], 20.00th=[ 750], 00:09:54.066 | 30.00th=[ 807], 40.00th=[ 857], 50.00th=[ 914], 60.00th=[ 963], 00:09:54.066 | 70.00th=[ 1020], 80.00th=[ 1057], 90.00th=[41681], 95.00th=[41681], 00:09:54.066 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:54.066 | 99.99th=[42206] 00:09:54.066 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:09:54.066 slat (usec): min=10, max=33215, avg=98.14, stdev=1466.47 00:09:54.066 clat (usec): min=170, max=804, 
avg=493.78, stdev=121.23 00:09:54.066 lat (usec): min=181, max=33808, avg=591.92, stdev=1476.11 00:09:54.066 clat percentiles (usec): 00:09:54.066 | 1.00th=[ 190], 5.00th=[ 277], 10.00th=[ 326], 20.00th=[ 388], 00:09:54.066 | 30.00th=[ 424], 40.00th=[ 465], 50.00th=[ 502], 60.00th=[ 537], 00:09:54.066 | 70.00th=[ 570], 80.00th=[ 603], 90.00th=[ 644], 95.00th=[ 676], 00:09:54.066 | 99.00th=[ 742], 99.50th=[ 766], 99.90th=[ 807], 99.95th=[ 807], 00:09:54.066 | 99.99th=[ 807] 00:09:54.066 bw ( KiB/s): min= 4096, max= 4096, per=37.58%, avg=4096.00, stdev= 0.00, samples=1 00:09:54.066 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:54.066 lat (usec) : 250=1.47%, 500=39.25%, 750=45.44%, 1000=8.47% 00:09:54.066 lat (msec) : 2=2.77%, 50=2.61% 00:09:54.066 cpu : usr=0.87%, sys=1.83%, ctx=616, majf=0, minf=1 00:09:54.066 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.066 issued rwts: total=102,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.066 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.066 00:09:54.066 Run status group 0 (all jobs): 00:09:54.066 READ: bw=4428KiB/s (4534kB/s), 69.7KiB/s-2046KiB/s (71.4kB/s-2095kB/s), io=4592KiB (4702kB), run=1001-1037msec 00:09:54.066 WRITE: bw=10.6MiB/s (11.2MB/s), 1975KiB/s-3973KiB/s (2022kB/s-4068kB/s), io=11.0MiB (11.6MB), run=1001-1037msec 00:09:54.066 00:09:54.066 Disk stats (read/write): 00:09:54.066 nvme0n1: ios=460/512, merge=0/0, ticks=1274/292, in_queue=1566, util=86.37% 00:09:54.066 nvme0n2: ios=568/1024, merge=0/0, ticks=1072/503, in_queue=1575, util=92.07% 00:09:54.066 nvme0n3: ios=73/512, merge=0/0, ticks=748/291, in_queue=1039, util=92.92% 00:09:54.066 nvme0n4: ios=129/512, merge=0/0, ticks=1817/234, in_queue=2051, util=99.11% 00:09:54.066 19:47:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:54.066 [global] 00:09:54.066 thread=1 00:09:54.066 invalidate=1 00:09:54.066 rw=randwrite 00:09:54.066 time_based=1 00:09:54.066 runtime=1 00:09:54.066 ioengine=libaio 00:09:54.066 direct=1 00:09:54.066 bs=4096 00:09:54.066 iodepth=1 00:09:54.066 norandommap=0 00:09:54.066 numjobs=1 00:09:54.066 00:09:54.066 verify_dump=1 00:09:54.066 verify_backlog=512 00:09:54.066 verify_state_save=0 00:09:54.066 do_verify=1 00:09:54.066 verify=crc32c-intel 00:09:54.066 [job0] 00:09:54.066 filename=/dev/nvme0n1 00:09:54.066 [job1] 00:09:54.066 filename=/dev/nvme0n2 00:09:54.066 [job2] 00:09:54.066 filename=/dev/nvme0n3 00:09:54.066 [job3] 00:09:54.066 filename=/dev/nvme0n4 00:09:54.066 Could not set queue depth (nvme0n1) 00:09:54.066 Could not set queue depth (nvme0n2) 00:09:54.066 Could not set queue depth (nvme0n3) 00:09:54.066 Could not set queue depth (nvme0n4) 00:09:54.328 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:54.328 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:54.328 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:54.328 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:54.328 fio-3.35 00:09:54.328 Starting 4 
threads 00:09:55.711 00:09:55.711 job0: (groupid=0, jobs=1): err= 0: pid=3515875: Tue Nov 26 19:47:56 2024 00:09:55.711 read: IOPS=18, BW=73.2KiB/s (75.0kB/s)(76.0KiB/1038msec) 00:09:55.711 slat (nsec): min=26433, max=27433, avg=26787.79, stdev=231.20 00:09:55.711 clat (usec): min=40840, max=41142, avg=40950.22, stdev=69.91 00:09:55.711 lat (usec): min=40866, max=41169, avg=40977.01, stdev=69.97 00:09:55.711 clat percentiles (usec): 00:09:55.711 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:09:55.711 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:55.711 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:55.711 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:55.711 | 99.99th=[41157] 00:09:55.711 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:09:55.711 slat (nsec): min=9741, max=55529, avg=29601.12, stdev=9968.28 00:09:55.711 clat (usec): min=228, max=740, avg=468.13, stdev=88.61 00:09:55.711 lat (usec): min=262, max=774, avg=497.73, stdev=92.27 00:09:55.711 clat percentiles (usec): 00:09:55.711 | 1.00th=[ 269], 5.00th=[ 302], 10.00th=[ 343], 20.00th=[ 396], 00:09:55.711 | 30.00th=[ 429], 40.00th=[ 453], 50.00th=[ 474], 60.00th=[ 490], 00:09:55.711 | 70.00th=[ 515], 80.00th=[ 537], 90.00th=[ 578], 95.00th=[ 611], 00:09:55.711 | 99.00th=[ 668], 99.50th=[ 693], 99.90th=[ 742], 99.95th=[ 742], 00:09:55.711 | 99.99th=[ 742] 00:09:55.711 bw ( KiB/s): min= 4096, max= 4096, per=35.84%, avg=4096.00, stdev= 0.00, samples=1 00:09:55.711 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:55.711 lat (usec) : 250=0.38%, 500=62.52%, 750=33.52% 00:09:55.711 lat (msec) : 50=3.58% 00:09:55.711 cpu : usr=0.68%, sys=1.45%, ctx=533, majf=0, minf=1 00:09:55.711 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:55.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.712 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.712 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:55.712 job1: (groupid=0, jobs=1): err= 0: pid=3515876: Tue Nov 26 19:47:56 2024 00:09:55.712 read: IOPS=28, BW=115KiB/s (118kB/s)(116KiB/1008msec) 00:09:55.712 slat (nsec): min=7855, max=30757, avg=25188.97, stdev=5972.54 00:09:55.712 clat (usec): min=553, max=41563, avg=25777.42, stdev=19967.31 00:09:55.712 lat (usec): min=566, max=41572, avg=25802.61, stdev=19968.81 00:09:55.712 clat percentiles (usec): 00:09:55.712 | 1.00th=[ 553], 5.00th=[ 578], 10.00th=[ 586], 20.00th=[ 660], 00:09:55.712 | 30.00th=[ 775], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:09:55.712 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:09:55.712 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:55.712 | 99.99th=[41681] 00:09:55.712 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:09:55.712 slat (nsec): min=9975, max=56660, avg=32079.13, stdev=8646.11 00:09:55.712 clat (usec): min=169, max=747, avg=464.98, stdev=112.27 00:09:55.712 lat (usec): min=203, max=791, avg=497.06, stdev=114.71 00:09:55.712 clat percentiles (usec): 00:09:55.712 | 1.00th=[ 247], 5.00th=[ 277], 10.00th=[ 302], 20.00th=[ 326], 00:09:55.712 | 30.00th=[ 392], 40.00th=[ 478], 50.00th=[ 515], 60.00th=[ 529], 00:09:55.712 | 70.00th=[ 545], 80.00th=[ 553], 90.00th=[ 578], 95.00th=[ 
594], 00:09:55.712 | 99.00th=[ 652], 99.50th=[ 676], 99.90th=[ 750], 99.95th=[ 750], 00:09:55.712 | 99.99th=[ 750] 00:09:55.712 bw ( KiB/s): min= 4096, max= 4096, per=35.84%, avg=4096.00, stdev= 0.00, samples=1 00:09:55.712 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:55.712 lat (usec) : 250=1.29%, 500=41.04%, 750=53.79%, 1000=0.55% 00:09:55.712 lat (msec) : 50=3.33% 00:09:55.712 cpu : usr=0.50%, sys=1.99%, ctx=542, majf=0, minf=1 00:09:55.712 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:55.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.712 issued rwts: total=29,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.712 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:55.712 job2: (groupid=0, jobs=1): err= 0: pid=3515877: Tue Nov 26 19:47:56 2024 00:09:55.712 read: IOPS=597, BW=2390KiB/s (2447kB/s)(2392KiB/1001msec) 00:09:55.712 slat (nsec): min=7452, max=63128, avg=24618.06, stdev=8252.25 00:09:55.712 clat (usec): min=526, max=41088, avg=836.30, stdev=1650.11 00:09:55.712 lat (usec): min=537, max=41118, avg=860.91, stdev=1650.45 00:09:55.712 clat percentiles (usec): 00:09:55.712 | 1.00th=[ 545], 5.00th=[ 652], 10.00th=[ 676], 20.00th=[ 709], 00:09:55.712 | 30.00th=[ 750], 40.00th=[ 766], 50.00th=[ 791], 60.00th=[ 799], 00:09:55.712 | 70.00th=[ 807], 80.00th=[ 816], 90.00th=[ 840], 95.00th=[ 857], 00:09:55.712 | 99.00th=[ 898], 99.50th=[ 922], 99.90th=[41157], 99.95th=[41157], 00:09:55.712 | 99.99th=[41157] 00:09:55.712 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:55.712 slat (nsec): min=10004, max=63065, avg=28799.58, stdev=10589.86 00:09:55.712 clat (usec): min=163, max=915, avg=433.48, stdev=91.06 00:09:55.712 lat (usec): min=173, max=949, avg=462.28, stdev=95.74 00:09:55.712 clat percentiles (usec): 00:09:55.712 | 1.00th=[ 255], 5.00th=[ 285], 10.00th=[ 322], 20.00th=[ 351], 00:09:55.712 | 30.00th=[ 404], 40.00th=[ 433], 50.00th=[ 445], 60.00th=[ 453], 00:09:55.712 | 70.00th=[ 465], 80.00th=[ 486], 90.00th=[ 506], 95.00th=[ 553], 00:09:55.712 | 99.00th=[ 775], 99.50th=[ 824], 99.90th=[ 889], 99.95th=[ 914], 00:09:55.712 | 99.99th=[ 914] 00:09:55.712 bw ( KiB/s): min= 4096, max= 4096, per=35.84%, avg=4096.00, stdev= 0.00, samples=1 00:09:55.712 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:55.712 lat (usec) : 250=0.43%, 500=55.30%, 750=18.31%, 1000=25.89% 00:09:55.712 lat (msec) : 50=0.06% 00:09:55.712 cpu : usr=2.20%, sys=4.60%, ctx=1623, majf=0, minf=1 00:09:55.712 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:55.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.712 issued rwts: total=598,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.712 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:55.712 job3: (groupid=0, jobs=1): err= 0: pid=3515878: Tue Nov 26 19:47:56 2024 00:09:55.712 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:55.712 slat (nsec): min=7252, max=43640, avg=24253.91, stdev=6726.90 00:09:55.712 clat (usec): min=411, max=41039, avg=963.90, stdev=1780.80 00:09:55.712 lat (usec): min=439, max=41048, avg=988.16, stdev=1780.36 00:09:55.712 clat percentiles (usec): 00:09:55.712 | 1.00th=[ 461], 5.00th=[ 603], 10.00th=[ 685], 20.00th=[ 750], 
00:09:55.712 | 30.00th=[ 807], 40.00th=[ 857], 50.00th=[ 922], 60.00th=[ 971], 00:09:55.712 | 70.00th=[ 996], 80.00th=[ 1020], 90.00th=[ 1045], 95.00th=[ 1074], 00:09:55.712 | 99.00th=[ 1123], 99.50th=[ 1139], 99.90th=[41157], 99.95th=[41157], 00:09:55.712 | 99.99th=[41157] 00:09:55.712 write: IOPS=917, BW=3668KiB/s (3756kB/s)(3672KiB/1001msec); 0 zone resets 00:09:55.712 slat (nsec): min=9850, max=66489, avg=27034.59, stdev=11044.26 00:09:55.712 clat (usec): min=127, max=1095, avg=499.93, stdev=142.40 00:09:55.712 lat (usec): min=161, max=1131, avg=526.96, stdev=148.77 00:09:55.712 clat percentiles (usec): 00:09:55.712 | 1.00th=[ 247], 5.00th=[ 273], 10.00th=[ 293], 20.00th=[ 355], 00:09:55.712 | 30.00th=[ 408], 40.00th=[ 469], 50.00th=[ 502], 60.00th=[ 562], 00:09:55.712 | 70.00th=[ 594], 80.00th=[ 627], 90.00th=[ 676], 95.00th=[ 717], 00:09:55.712 | 99.00th=[ 799], 99.50th=[ 865], 99.90th=[ 1090], 99.95th=[ 1090], 00:09:55.712 | 99.99th=[ 1090] 00:09:55.712 bw ( KiB/s): min= 4096, max= 4096, per=35.84%, avg=4096.00, stdev= 0.00, samples=1 00:09:55.712 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:55.712 lat (usec) : 250=0.84%, 500=31.40%, 750=37.20%, 1000=20.28% 00:09:55.712 lat (msec) : 2=10.21%, 50=0.07% 00:09:55.712 cpu : usr=2.70%, sys=3.10%, ctx=1432, majf=0, minf=1 00:09:55.712 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:55.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.712 issued rwts: total=512,918,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.712 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:55.712 00:09:55.712 Run status group 0 (all jobs): 00:09:55.712 READ: bw=4462KiB/s (4570kB/s), 73.2KiB/s-2390KiB/s (75.0kB/s-2447kB/s), io=4632KiB (4743kB), run=1001-1038msec 00:09:55.712 WRITE: bw=11.2MiB/s (11.7MB/s), 1973KiB/s-4092KiB/s (2020kB/s-4190kB/s), io=11.6MiB (12.1MB), run=1001-1038msec 00:09:55.712 00:09:55.712 Disk stats (read/write): 00:09:55.712 nvme0n1: ios=37/512, merge=0/0, ticks=1414/233, in_queue=1647, util=84.37% 00:09:55.712 nvme0n2: ios=76/512, merge=0/0, ticks=1048/230, in_queue=1278, util=88.69% 00:09:55.712 nvme0n3: ios=559/818, merge=0/0, ticks=531/352, in_queue=883, util=95.36% 00:09:55.712 nvme0n4: ios=566/550, merge=0/0, ticks=608/287, in_queue=895, util=97.33% 00:09:55.712 19:47:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:55.712 [global] 00:09:55.712 thread=1 00:09:55.712 invalidate=1 00:09:55.712 rw=write 00:09:55.712 time_based=1 00:09:55.712 runtime=1 00:09:55.712 ioengine=libaio 00:09:55.712 direct=1 00:09:55.712 bs=4096 00:09:55.712 iodepth=128 00:09:55.712 norandommap=0 00:09:55.712 numjobs=1 00:09:55.712 00:09:55.712 verify_dump=1 00:09:55.712 verify_backlog=512 00:09:55.712 verify_state_save=0 00:09:55.712 do_verify=1 00:09:55.712 verify=crc32c-intel 00:09:55.712 [job0] 00:09:55.712 filename=/dev/nvme0n1 00:09:55.712 [job1] 00:09:55.712 filename=/dev/nvme0n2 00:09:55.712 [job2] 00:09:55.712 filename=/dev/nvme0n3 00:09:55.712 [job3] 00:09:55.712 filename=/dev/nvme0n4 00:09:55.712 Could not set queue depth (nvme0n1) 00:09:55.712 Could not set queue depth (nvme0n2) 00:09:55.712 Could not set queue depth (nvme0n3) 00:09:55.712 Could not set queue depth (nvme0n4) 00:09:55.972 job0: (g=0): rw=write, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:55.972 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:55.972 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:55.972 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:55.972 fio-3.35 00:09:55.972 Starting 4 threads 00:09:57.356 00:09:57.356 job0: (groupid=0, jobs=1): err= 0: pid=3516483: Tue Nov 26 19:47:57 2024 00:09:57.356 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:09:57.356 slat (nsec): min=966, max=14413k, avg=90543.87, stdev=690027.85 00:09:57.356 clat (usec): min=3107, max=55977, avg=11888.31, stdev=6786.24 00:09:57.356 lat (usec): min=3110, max=55986, avg=11978.86, stdev=6850.23 00:09:57.356 clat percentiles (usec): 00:09:57.356 | 1.00th=[ 4490], 5.00th=[ 5473], 10.00th=[ 5800], 20.00th=[ 6390], 00:09:57.356 | 30.00th=[ 7111], 40.00th=[ 8717], 50.00th=[11469], 60.00th=[12780], 00:09:57.356 | 70.00th=[13304], 80.00th=[15401], 90.00th=[19792], 95.00th=[21365], 00:09:57.356 | 99.00th=[41681], 99.50th=[53740], 99.90th=[55837], 99.95th=[55837], 00:09:57.356 | 99.99th=[55837] 00:09:57.356 write: IOPS=4181, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1004msec); 0 zone resets 00:09:57.356 slat (nsec): min=1645, max=18215k, avg=143296.89, stdev=796854.76 00:09:57.356 clat (usec): min=369, max=64235, avg=18679.50, stdev=19054.89 00:09:57.356 lat (usec): min=1535, max=64247, avg=18822.79, stdev=19185.88 00:09:57.356 clat percentiles (usec): 00:09:57.356 | 1.00th=[ 2638], 5.00th=[ 3818], 10.00th=[ 5080], 20.00th=[ 5669], 00:09:57.356 | 30.00th=[ 7242], 40.00th=[ 7898], 50.00th=[ 9765], 60.00th=[11731], 00:09:57.356 | 70.00th=[14353], 80.00th=[34341], 90.00th=[56886], 95.00th=[59507], 00:09:57.356 | 99.00th=[63177], 99.50th=[63701], 99.90th=[64226], 99.95th=[64226], 00:09:57.356 | 99.99th=[64226] 00:09:57.356 bw ( KiB/s): min=10424, max=22400, per=18.95%, avg=16412.00, stdev=8468.31, samples=2 00:09:57.356 iops : min= 2606, max= 5600, avg=4103.00, stdev=2117.08, samples=2 00:09:57.356 lat (usec) : 500=0.01% 00:09:57.356 lat (msec) : 2=0.12%, 4=2.97%, 10=45.20%, 20=34.89%, 50=8.49% 00:09:57.356 lat (msec) : 100=8.32% 00:09:57.356 cpu : usr=2.89%, sys=5.38%, ctx=346, majf=0, minf=1 00:09:57.356 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:57.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:57.356 issued rwts: total=4096,4198,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.356 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:57.356 job1: (groupid=0, jobs=1): err= 0: pid=3516485: Tue Nov 26 19:47:57 2024 00:09:57.356 read: IOPS=7097, BW=27.7MiB/s (29.1MB/s)(28.0MiB/1010msec) 00:09:57.356 slat (nsec): min=961, max=11134k, avg=64412.73, stdev=463832.15 00:09:57.356 clat (usec): min=2515, max=28492, avg=8485.83, stdev=2995.15 00:09:57.356 lat (usec): min=2534, max=28499, avg=8550.24, stdev=3029.76 00:09:57.356 clat percentiles (usec): 00:09:57.356 | 1.00th=[ 4228], 5.00th=[ 5735], 10.00th=[ 6128], 20.00th=[ 6456], 00:09:57.356 | 30.00th=[ 6718], 40.00th=[ 7046], 50.00th=[ 7635], 60.00th=[ 8455], 00:09:57.356 | 70.00th=[ 8979], 80.00th=[10159], 90.00th=[11469], 95.00th=[14615], 00:09:57.356 | 99.00th=[21365], 99.50th=[21627], 99.90th=[24773], 99.95th=[28443], 00:09:57.356 | 
99.99th=[28443] 00:09:57.356 write: IOPS=7593, BW=29.7MiB/s (31.1MB/s)(30.0MiB/1010msec); 0 zone resets 00:09:57.356 slat (nsec): min=1624, max=11357k, avg=64529.92, stdev=418842.02 00:09:57.356 clat (usec): min=1166, max=27054, avg=8775.42, stdev=4769.33 00:09:57.356 lat (usec): min=1177, max=27056, avg=8839.95, stdev=4801.00 00:09:57.356 clat percentiles (usec): 00:09:57.356 | 1.00th=[ 2900], 5.00th=[ 4015], 10.00th=[ 4555], 20.00th=[ 5211], 00:09:57.356 | 30.00th=[ 5800], 40.00th=[ 6194], 50.00th=[ 6521], 60.00th=[ 8160], 00:09:57.356 | 70.00th=[ 9765], 80.00th=[12518], 90.00th=[16909], 95.00th=[19530], 00:09:57.356 | 99.00th=[22414], 99.50th=[23987], 99.90th=[26346], 99.95th=[27132], 00:09:57.356 | 99.99th=[27132] 00:09:57.357 bw ( KiB/s): min=29232, max=31104, per=34.82%, avg=30168.00, stdev=1323.70, samples=2 00:09:57.357 iops : min= 7308, max= 7776, avg=7542.00, stdev=330.93, samples=2 00:09:57.357 lat (msec) : 2=0.10%, 4=2.48%, 10=71.54%, 20=23.21%, 50=2.67% 00:09:57.357 cpu : usr=5.95%, sys=8.13%, ctx=449, majf=0, minf=1 00:09:57.357 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:57.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:57.357 issued rwts: total=7168,7669,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.357 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:57.357 job2: (groupid=0, jobs=1): err= 0: pid=3516488: Tue Nov 26 19:47:57 2024 00:09:57.357 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:09:57.357 slat (nsec): min=978, max=15970k, avg=156831.57, stdev=1002705.10 00:09:57.357 clat (usec): min=6491, max=51139, avg=17888.71, stdev=10830.04 00:09:57.357 lat (usec): min=6495, max=51170, avg=18045.55, stdev=10945.54 00:09:57.357 clat percentiles (usec): 00:09:57.357 | 1.00th=[ 7111], 5.00th=[ 8356], 10.00th=[ 9110], 20.00th=[ 9503], 00:09:57.357 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10683], 60.00th=[17171], 00:09:57.357 | 70.00th=[22676], 80.00th=[29492], 90.00th=[34866], 95.00th=[39584], 00:09:57.357 | 99.00th=[44303], 99.50th=[45351], 99.90th=[47973], 99.95th=[51119], 00:09:57.357 | 99.99th=[51119] 00:09:57.357 write: IOPS=2819, BW=11.0MiB/s (11.5MB/s)(11.1MiB/1007msec); 0 zone resets 00:09:57.357 slat (nsec): min=1709, max=25942k, avg=202612.84, stdev=968895.07 00:09:57.357 clat (usec): min=2387, max=62470, avg=28777.15, stdev=16343.97 00:09:57.357 lat (usec): min=2399, max=62479, avg=28979.76, stdev=16443.76 00:09:57.357 clat percentiles (usec): 00:09:57.357 | 1.00th=[ 8029], 5.00th=[ 9634], 10.00th=[10683], 20.00th=[12256], 00:09:57.357 | 30.00th=[12780], 40.00th=[17433], 50.00th=[24249], 60.00th=[34866], 00:09:57.357 | 70.00th=[43254], 80.00th=[47449], 90.00th=[50594], 95.00th=[54264], 00:09:57.357 | 99.00th=[58459], 99.50th=[59507], 99.90th=[62653], 99.95th=[62653], 00:09:57.357 | 99.99th=[62653] 00:09:57.357 bw ( KiB/s): min=10296, max=11400, per=12.52%, avg=10848.00, stdev=780.65, samples=2 00:09:57.357 iops : min= 2574, max= 2850, avg=2712.00, stdev=195.16, samples=2 00:09:57.357 lat (msec) : 4=0.04%, 10=20.41%, 20=34.41%, 50=39.36%, 100=5.78% 00:09:57.357 cpu : usr=2.58%, sys=2.98%, ctx=380, majf=0, minf=2 00:09:57.357 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:09:57.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:57.357 issued 
rwts: total=2560,2839,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.357 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:57.357 job3: (groupid=0, jobs=1): err= 0: pid=3516489: Tue Nov 26 19:47:57 2024 00:09:57.357 read: IOPS=6632, BW=25.9MiB/s (27.2MB/s)(26.1MiB/1008msec) 00:09:57.357 slat (nsec): min=1070, max=8046.1k, avg=71824.22, stdev=529737.93 00:09:57.357 clat (usec): min=1187, max=24734, avg=9648.41, stdev=3511.09 00:09:57.357 lat (usec): min=1216, max=24764, avg=9720.24, stdev=3548.21 00:09:57.357 clat percentiles (usec): 00:09:57.357 | 1.00th=[ 4817], 5.00th=[ 6259], 10.00th=[ 6587], 20.00th=[ 7046], 00:09:57.357 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 8160], 60.00th=[ 9372], 00:09:57.357 | 70.00th=[10814], 80.00th=[12518], 90.00th=[15008], 95.00th=[16712], 00:09:57.357 | 99.00th=[20579], 99.50th=[21890], 99.90th=[24249], 99.95th=[24511], 00:09:57.357 | 99.99th=[24773] 00:09:57.357 write: IOPS=7111, BW=27.8MiB/s (29.1MB/s)(28.0MiB/1008msec); 0 zone resets 00:09:57.357 slat (nsec): min=1749, max=12435k, avg=65282.36, stdev=510234.85 00:09:57.357 clat (usec): min=1673, max=32128, avg=8820.97, stdev=4718.93 00:09:57.357 lat (usec): min=1681, max=32139, avg=8886.25, stdev=4751.60 00:09:57.357 clat percentiles (usec): 00:09:57.357 | 1.00th=[ 3785], 5.00th=[ 4424], 10.00th=[ 4555], 20.00th=[ 5538], 00:09:57.357 | 30.00th=[ 6456], 40.00th=[ 6849], 50.00th=[ 7177], 60.00th=[ 8455], 00:09:57.357 | 70.00th=[ 9765], 80.00th=[11863], 90.00th=[13042], 95.00th=[16319], 00:09:57.357 | 99.00th=[27919], 99.50th=[31589], 99.90th=[32113], 99.95th=[32113], 00:09:57.357 | 99.99th=[32113] 00:09:57.357 bw ( KiB/s): min=27184, max=29384, per=32.65%, avg=28284.00, stdev=1555.63, samples=2 00:09:57.357 iops : min= 6796, max= 7346, avg=7071.00, stdev=388.91, samples=2 00:09:57.357 lat (msec) : 2=0.23%, 4=0.84%, 10=67.28%, 20=28.90%, 50=2.75% 00:09:57.357 cpu : usr=6.26%, sys=7.94%, ctx=342, majf=0, minf=1 00:09:57.357 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:57.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:57.357 issued rwts: total=6686,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.357 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:57.357 00:09:57.357 Run status group 0 (all jobs): 00:09:57.357 READ: bw=79.3MiB/s (83.2MB/s), 9.93MiB/s-27.7MiB/s (10.4MB/s-29.1MB/s), io=80.1MiB (84.0MB), run=1004-1010msec 00:09:57.357 WRITE: bw=84.6MiB/s (88.7MB/s), 11.0MiB/s-29.7MiB/s (11.5MB/s-31.1MB/s), io=85.4MiB (89.6MB), run=1004-1010msec 00:09:57.357 00:09:57.357 Disk stats (read/write): 00:09:57.357 nvme0n1: ios=3636/3727, merge=0/0, ticks=40376/57471, in_queue=97847, util=84.17% 00:09:57.357 nvme0n2: ios=6194/6383, merge=0/0, ticks=49584/50825, in_queue=100409, util=90.21% 00:09:57.357 nvme0n3: ios=2097/2175, merge=0/0, ticks=19002/33254, in_queue=52256, util=93.99% 00:09:57.357 nvme0n4: ios=5677/5639, merge=0/0, ticks=53755/47358, in_queue=101113, util=94.24% 00:09:57.357 19:47:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:57.357 [global] 00:09:57.357 thread=1 00:09:57.357 invalidate=1 00:09:57.357 rw=randwrite 00:09:57.357 time_based=1 00:09:57.357 runtime=1 00:09:57.357 ioengine=libaio 00:09:57.357 direct=1 00:09:57.357 bs=4096 00:09:57.357 iodepth=128 00:09:57.357 
norandommap=0 00:09:57.357 numjobs=1 00:09:57.357 00:09:57.357 verify_dump=1 00:09:57.357 verify_backlog=512 00:09:57.357 verify_state_save=0 00:09:57.357 do_verify=1 00:09:57.357 verify=crc32c-intel 00:09:57.357 [job0] 00:09:57.357 filename=/dev/nvme0n1 00:09:57.357 [job1] 00:09:57.357 filename=/dev/nvme0n2 00:09:57.357 [job2] 00:09:57.357 filename=/dev/nvme0n3 00:09:57.357 [job3] 00:09:57.357 filename=/dev/nvme0n4 00:09:57.357 Could not set queue depth (nvme0n1) 00:09:57.357 Could not set queue depth (nvme0n2) 00:09:57.357 Could not set queue depth (nvme0n3) 00:09:57.357 Could not set queue depth (nvme0n4) 00:09:57.618 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:57.618 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:57.618 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:57.618 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:57.618 fio-3.35 00:09:57.618 Starting 4 threads 00:09:59.015 00:09:59.015 job0: (groupid=0, jobs=1): err= 0: pid=3517117: Tue Nov 26 19:47:59 2024 00:09:59.015 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:09:59.015 slat (nsec): min=915, max=46022k, avg=78023.66, stdev=738011.00 00:09:59.015 clat (usec): min=3079, max=64380, avg=9609.55, stdev=7373.07 00:09:59.015 lat (usec): min=3081, max=64407, avg=9687.57, stdev=7426.44 00:09:59.015 clat percentiles (usec): 00:09:59.015 | 1.00th=[ 4359], 5.00th=[ 5342], 10.00th=[ 5866], 20.00th=[ 6259], 00:09:59.015 | 30.00th=[ 6783], 40.00th=[ 7308], 50.00th=[ 7701], 60.00th=[ 8029], 00:09:59.015 | 70.00th=[ 8356], 80.00th=[ 9634], 90.00th=[14746], 95.00th=[21627], 00:09:59.015 | 99.00th=[56886], 99.50th=[60031], 99.90th=[60031], 99.95th=[60031], 00:09:59.015 | 99.99th=[64226] 00:09:59.015 write: IOPS=6666, BW=26.0MiB/s (27.3MB/s)(26.1MiB/1003msec); 0 zone resets 00:09:59.015 slat (nsec): min=1570, max=19727k, avg=67632.74, stdev=532402.54 00:09:59.015 clat (usec): min=749, max=64188, avg=9233.70, stdev=7419.58 00:09:59.016 lat (usec): min=2459, max=64196, avg=9301.33, stdev=7460.91 00:09:59.016 clat percentiles (usec): 00:09:59.016 | 1.00th=[ 3916], 5.00th=[ 4621], 10.00th=[ 5145], 20.00th=[ 5800], 00:09:59.016 | 30.00th=[ 6128], 40.00th=[ 6652], 50.00th=[ 6980], 60.00th=[ 7111], 00:09:59.016 | 70.00th=[ 7570], 80.00th=[ 9765], 90.00th=[16188], 95.00th=[27395], 00:09:59.016 | 99.00th=[40109], 99.50th=[58459], 99.90th=[58983], 99.95th=[58983], 00:09:59.016 | 99.99th=[64226] 00:09:59.016 bw ( KiB/s): min=25464, max=27784, per=30.43%, avg=26624.00, stdev=1640.49, samples=2 00:09:59.016 iops : min= 6366, max= 6946, avg=6656.00, stdev=410.12, samples=2 00:09:59.016 lat (usec) : 750=0.01% 00:09:59.016 lat (msec) : 4=0.85%, 10=80.39%, 20=12.75%, 50=5.06%, 100=0.95% 00:09:59.016 cpu : usr=3.79%, sys=5.39%, ctx=615, majf=0, minf=1 00:09:59.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:59.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:59.016 issued rwts: total=6656,6686,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.016 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:59.016 job1: (groupid=0, jobs=1): err= 0: pid=3517118: Tue Nov 26 19:47:59 2024 00:09:59.016 read: IOPS=5609, 
BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:09:59.016 slat (nsec): min=942, max=17156k, avg=89431.06, stdev=657299.43 00:09:59.016 clat (usec): min=3483, max=75314, avg=11777.29, stdev=6698.64 00:09:59.016 lat (usec): min=3489, max=75315, avg=11866.73, stdev=6756.03 00:09:59.016 clat percentiles (usec): 00:09:59.016 | 1.00th=[ 3884], 5.00th=[ 6128], 10.00th=[ 6849], 20.00th=[ 7570], 00:09:59.016 | 30.00th=[ 8029], 40.00th=[ 8586], 50.00th=[ 9896], 60.00th=[10683], 00:09:59.016 | 70.00th=[12256], 80.00th=[13829], 90.00th=[19006], 95.00th=[25560], 00:09:59.016 | 99.00th=[38011], 99.50th=[41681], 99.90th=[57410], 99.95th=[57410], 00:09:59.016 | 99.99th=[74974] 00:09:59.016 write: IOPS=5727, BW=22.4MiB/s (23.5MB/s)(22.5MiB/1004msec); 0 zone resets 00:09:59.016 slat (nsec): min=1616, max=19603k, avg=80237.83, stdev=636446.90 00:09:59.016 clat (usec): min=915, max=47801, avg=10560.31, stdev=7446.09 00:09:59.016 lat (usec): min=1012, max=47849, avg=10640.55, stdev=7501.71 00:09:59.016 clat percentiles (usec): 00:09:59.016 | 1.00th=[ 3392], 5.00th=[ 3851], 10.00th=[ 4424], 20.00th=[ 5997], 00:09:59.016 | 30.00th=[ 7111], 40.00th=[ 7504], 50.00th=[ 8029], 60.00th=[ 8717], 00:09:59.016 | 70.00th=[ 9372], 80.00th=[13435], 90.00th=[20317], 95.00th=[28181], 00:09:59.016 | 99.00th=[39060], 99.50th=[39584], 99.90th=[41157], 99.95th=[41157], 00:09:59.016 | 99.99th=[47973] 00:09:59.016 bw ( KiB/s): min=20480, max=24576, per=25.75%, avg=22528.00, stdev=2896.31, samples=2 00:09:59.016 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:09:59.016 lat (usec) : 1000=0.01% 00:09:59.016 lat (msec) : 2=0.07%, 4=4.09%, 10=58.01%, 20=28.06%, 50=9.69% 00:09:59.016 lat (msec) : 100=0.06% 00:09:59.016 cpu : usr=3.69%, sys=6.58%, ctx=387, majf=0, minf=1 00:09:59.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:59.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:59.016 issued rwts: total=5632,5750,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.016 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:59.016 job2: (groupid=0, jobs=1): err= 0: pid=3517119: Tue Nov 26 19:47:59 2024 00:09:59.016 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:09:59.016 slat (nsec): min=955, max=17647k, avg=126169.93, stdev=868863.68 00:09:59.016 clat (usec): min=6984, max=58017, avg=17467.89, stdev=7350.38 00:09:59.016 lat (usec): min=6989, max=59470, avg=17594.06, stdev=7418.93 00:09:59.016 clat percentiles (usec): 00:09:59.016 | 1.00th=[ 7767], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10683], 00:09:59.016 | 30.00th=[12518], 40.00th=[15795], 50.00th=[16712], 60.00th=[17433], 00:09:59.016 | 70.00th=[19268], 80.00th=[22152], 90.00th=[26870], 95.00th=[31851], 00:09:59.016 | 99.00th=[39060], 99.50th=[42730], 99.90th=[57934], 99.95th=[57934], 00:09:59.016 | 99.99th=[57934] 00:09:59.016 write: IOPS=4000, BW=15.6MiB/s (16.4MB/s)(15.8MiB/1011msec); 0 zone resets 00:09:59.016 slat (nsec): min=1639, max=11317k, avg=129209.48, stdev=713466.04 00:09:59.016 clat (usec): min=1188, max=51203, avg=16215.75, stdev=9922.76 00:09:59.016 lat (usec): min=1200, max=51233, avg=16344.96, stdev=9996.76 00:09:59.016 clat percentiles (usec): 00:09:59.016 | 1.00th=[ 5735], 5.00th=[ 7832], 10.00th=[ 8094], 20.00th=[ 9241], 00:09:59.016 | 30.00th=[ 9503], 40.00th=[10290], 50.00th=[12387], 60.00th=[14746], 00:09:59.016 | 70.00th=[17171], 80.00th=[25035], 90.00th=[29754], 
95.00th=[38011], 00:09:59.016 | 99.00th=[47973], 99.50th=[50070], 99.90th=[50594], 99.95th=[50594], 00:09:59.016 | 99.99th=[51119] 00:09:59.016 bw ( KiB/s): min=14960, max=16384, per=17.91%, avg=15672.00, stdev=1006.92, samples=2 00:09:59.016 iops : min= 3740, max= 4096, avg=3918.00, stdev=251.73, samples=2 00:09:59.016 lat (msec) : 2=0.03%, 4=0.17%, 10=26.05%, 20=46.32%, 50=27.02% 00:09:59.016 lat (msec) : 100=0.42% 00:09:59.016 cpu : usr=2.87%, sys=4.46%, ctx=336, majf=0, minf=1 00:09:59.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:59.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:59.016 issued rwts: total=3584,4045,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.016 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:59.016 job3: (groupid=0, jobs=1): err= 0: pid=3517120: Tue Nov 26 19:47:59 2024 00:09:59.016 read: IOPS=5106, BW=19.9MiB/s (20.9MB/s)(20.1MiB/1008msec) 00:09:59.016 slat (nsec): min=943, max=15243k, avg=75715.98, stdev=692523.53 00:09:59.016 clat (usec): min=1540, max=46156, avg=11320.80, stdev=5775.87 00:09:59.016 lat (usec): min=1547, max=46164, avg=11396.52, stdev=5838.15 00:09:59.016 clat percentiles (usec): 00:09:59.016 | 1.00th=[ 2147], 5.00th=[ 3949], 10.00th=[ 6259], 20.00th=[ 7635], 00:09:59.016 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10552], 00:09:59.016 | 70.00th=[11338], 80.00th=[15270], 90.00th=[18744], 95.00th=[22414], 00:09:59.016 | 99.00th=[35390], 99.50th=[39584], 99.90th=[42730], 99.95th=[46400], 00:09:59.016 | 99.99th=[46400] 00:09:59.016 write: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec); 0 zone resets 00:09:59.016 slat (nsec): min=1534, max=13723k, avg=80902.71, stdev=563709.22 00:09:59.016 clat (usec): min=870, max=40040, avg=12373.72, stdev=8477.28 00:09:59.016 lat (usec): min=903, max=40049, avg=12454.62, stdev=8539.03 00:09:59.016 clat percentiles (usec): 00:09:59.016 | 1.00th=[ 1336], 5.00th=[ 2868], 10.00th=[ 4686], 20.00th=[ 6194], 00:09:59.016 | 30.00th=[ 6849], 40.00th=[ 8356], 50.00th=[10028], 60.00th=[10683], 00:09:59.016 | 70.00th=[12911], 80.00th=[20055], 90.00th=[26870], 95.00th=[30540], 00:09:59.016 | 99.00th=[36439], 99.50th=[37487], 99.90th=[39060], 99.95th=[39060], 00:09:59.016 | 99.99th=[40109] 00:09:59.016 bw ( KiB/s): min=21728, max=22528, per=25.29%, avg=22128.00, stdev=565.69, samples=2 00:09:59.016 iops : min= 5432, max= 5632, avg=5532.00, stdev=141.42, samples=2 00:09:59.016 lat (usec) : 1000=0.26% 00:09:59.016 lat (msec) : 2=1.75%, 4=4.89%, 10=44.32%, 20=34.63%, 50=14.15% 00:09:59.016 cpu : usr=4.37%, sys=5.76%, ctx=395, majf=0, minf=2 00:09:59.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:59.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:59.016 issued rwts: total=5147,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.016 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:59.016 00:09:59.016 Run status group 0 (all jobs): 00:09:59.016 READ: bw=81.2MiB/s (85.2MB/s), 13.8MiB/s-25.9MiB/s (14.5MB/s-27.2MB/s), io=82.1MiB (86.1MB), run=1003-1011msec 00:09:59.016 WRITE: bw=85.4MiB/s (89.6MB/s), 15.6MiB/s-26.0MiB/s (16.4MB/s-27.3MB/s), io=86.4MiB (90.6MB), run=1003-1011msec 00:09:59.016 00:09:59.016 Disk stats (read/write): 00:09:59.016 nvme0n1: ios=5010/5120, merge=0/0, 
ticks=21718/17240, in_queue=38958, util=98.60% 00:09:59.016 nvme0n2: ios=4192/4608, merge=0/0, ticks=28457/24250, in_queue=52707, util=95.54% 00:09:59.016 nvme0n3: ios=3064/3079, merge=0/0, ticks=30941/25330, in_queue=56271, util=86.40% 00:09:59.016 nvme0n4: ios=3767/4096, merge=0/0, ticks=38095/52431, in_queue=90526, util=88.67% 00:09:59.016 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:59.016 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3517505 00:09:59.016 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:59.016 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:59.016 [global] 00:09:59.016 thread=1 00:09:59.016 invalidate=1 00:09:59.016 rw=read 00:09:59.016 time_based=1 00:09:59.016 runtime=10 00:09:59.016 ioengine=libaio 00:09:59.016 direct=1 00:09:59.016 bs=4096 00:09:59.016 iodepth=1 00:09:59.016 norandommap=1 00:09:59.016 numjobs=1 00:09:59.016 00:09:59.016 [job0] 00:09:59.016 filename=/dev/nvme0n1 00:09:59.016 [job1] 00:09:59.016 filename=/dev/nvme0n2 00:09:59.016 [job2] 00:09:59.016 filename=/dev/nvme0n3 00:09:59.016 [job3] 00:09:59.016 filename=/dev/nvme0n4 00:09:59.016 Could not set queue depth (nvme0n1) 00:09:59.016 Could not set queue depth (nvme0n2) 00:09:59.016 Could not set queue depth (nvme0n3) 00:09:59.016 Could not set queue depth (nvme0n4) 00:09:59.602 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:59.602 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:59.602 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:59.602 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:59.602 fio-3.35 00:09:59.602 Starting 4 threads 00:10:02.150 19:48:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:02.150 19:48:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:02.150 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=1105920, buflen=4096 00:10:02.150 fio: pid=3517788, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:02.412 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=11059200, buflen=4096 00:10:02.412 fio: pid=3517787, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:02.412 19:48:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:02.412 19:48:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:02.673 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=7786496, buflen=4096 00:10:02.673 fio: pid=3517767, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:02.673 19:48:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:10:02.673 19:48:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:02.673 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=9728000, buflen=4096 00:10:02.673 fio: pid=3517777, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:02.673 19:48:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:02.673 19:48:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:02.934 00:10:02.935 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3517767: Tue Nov 26 19:48:03 2024 00:10:02.935 read: IOPS=642, BW=2568KiB/s (2630kB/s)(7604KiB/2961msec) 00:10:02.935 slat (usec): min=6, max=15839, avg=39.62, stdev=442.80 00:10:02.935 clat (usec): min=537, max=42072, avg=1506.37, stdev=4462.09 00:10:02.935 lat (usec): min=563, max=42097, avg=1546.00, stdev=4482.82 00:10:02.935 clat percentiles (usec): 00:10:02.935 | 1.00th=[ 693], 5.00th=[ 791], 10.00th=[ 848], 20.00th=[ 922], 00:10:02.935 | 30.00th=[ 955], 40.00th=[ 979], 50.00th=[ 1004], 60.00th=[ 1029], 00:10:02.935 | 70.00th=[ 1057], 80.00th=[ 1123], 90.00th=[ 1205], 95.00th=[ 1254], 00:10:02.935 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:10:02.935 | 99.99th=[42206] 00:10:02.935 bw ( KiB/s): min= 296, max= 3816, per=25.55%, avg=2348.80, stdev=1522.44, samples=5 00:10:02.935 iops : min= 74, max= 954, avg=587.20, stdev=380.61, samples=5 00:10:02.935 lat (usec) : 750=3.36%, 1000=46.42% 00:10:02.935 lat (msec) : 2=48.84%, 4=0.05%, 10=0.05%, 50=1.21% 00:10:02.935 cpu : usr=0.64%, sys=1.96%, ctx=1904, majf=0, minf=1 00:10:02.935 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.935 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.935 issued rwts: total=1902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.935 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.935 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3517777: Tue Nov 26 19:48:03 2024 00:10:02.935 read: IOPS=753, BW=3012KiB/s (3084kB/s)(9500KiB/3154msec) 00:10:02.935 slat (usec): min=6, max=22914, avg=70.33, stdev=868.88 00:10:02.935 clat (usec): min=429, max=42072, avg=1244.80, stdev=3425.86 00:10:02.935 lat (usec): min=462, max=42100, avg=1315.15, stdev=3530.92 00:10:02.935 clat percentiles (usec): 00:10:02.935 | 1.00th=[ 668], 5.00th=[ 775], 10.00th=[ 824], 20.00th=[ 889], 00:10:02.935 | 30.00th=[ 922], 40.00th=[ 947], 50.00th=[ 963], 60.00th=[ 988], 00:10:02.935 | 70.00th=[ 1004], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1090], 00:10:02.935 | 99.00th=[ 1254], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:02.935 | 99.99th=[42206] 00:10:02.935 bw ( KiB/s): min= 200, max= 4104, per=33.00%, avg=3033.83, stdev=1549.86, samples=6 00:10:02.935 iops : min= 50, max= 1026, avg=758.33, stdev=387.42, samples=6 00:10:02.935 lat (usec) : 500=0.08%, 750=3.41%, 1000=65.19% 00:10:02.935 lat (msec) : 2=30.51%, 10=0.04%, 50=0.72% 00:10:02.935 cpu : usr=1.01%, sys=3.39%, ctx=2386, majf=0, minf=2 00:10:02.935 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.935 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.935 issued rwts: total=2376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.935 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.935 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3517787: Tue Nov 26 19:48:03 2024 00:10:02.935 read: IOPS=968, BW=3874KiB/s (3967kB/s)(10.5MiB/2788msec) 00:10:02.935 slat (usec): min=5, max=21776, avg=40.75, stdev=507.41 00:10:02.935 clat (usec): min=352, max=41640, avg=977.43, stdev=1104.61 00:10:02.935 lat (usec): min=380, max=41653, avg=1018.18, stdev=1215.76 00:10:02.935 clat percentiles (usec): 00:10:02.935 | 1.00th=[ 635], 5.00th=[ 766], 10.00th=[ 824], 20.00th=[ 881], 00:10:02.935 | 30.00th=[ 922], 40.00th=[ 947], 50.00th=[ 963], 60.00th=[ 979], 00:10:02.935 | 70.00th=[ 996], 80.00th=[ 1012], 90.00th=[ 1045], 95.00th=[ 1074], 00:10:02.935 | 99.00th=[ 1156], 99.50th=[ 1188], 99.90th=[ 1450], 99.95th=[41157], 00:10:02.935 | 99.99th=[41681] 00:10:02.935 bw ( KiB/s): min= 3696, max= 4128, per=42.93%, avg=3945.60, stdev=216.03, samples=5 00:10:02.935 iops : min= 924, max= 1032, avg=986.40, stdev=54.01, samples=5 00:10:02.935 lat (usec) : 500=0.04%, 750=4.18%, 1000=68.23% 00:10:02.935 lat (msec) : 2=27.43%, 50=0.07% 00:10:02.935 cpu : usr=1.79%, sys=3.91%, ctx=2703, majf=0, minf=2 00:10:02.935 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.935 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.935 issued rwts: total=2701,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.935 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.935 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3517788: Tue Nov 26 19:48:03 2024 00:10:02.935 read: IOPS=103, BW=412KiB/s (421kB/s)(1080KiB/2624msec) 00:10:02.935 slat (nsec): min=3604, max=45804, avg=24379.79, stdev=6591.15 00:10:02.935 clat (usec): min=335, max=41983, avg=9609.78, stdev=16598.32 00:10:02.935 lat (usec): min=346, max=42009, avg=9634.15, stdev=16599.74 00:10:02.935 clat percentiles (usec): 00:10:02.935 | 1.00th=[ 529], 5.00th=[ 611], 10.00th=[ 717], 20.00th=[ 881], 00:10:02.935 | 30.00th=[ 947], 40.00th=[ 996], 50.00th=[ 1029], 60.00th=[ 1057], 00:10:02.935 | 70.00th=[ 1090], 80.00th=[40633], 90.00th=[41157], 95.00th=[41681], 00:10:02.935 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:02.935 | 99.99th=[42206] 00:10:02.935 bw ( KiB/s): min= 128, max= 1240, per=4.65%, avg=427.20, stdev=464.71, samples=5 00:10:02.935 iops : min= 32, max= 310, avg=106.80, stdev=116.18, samples=5 00:10:02.935 lat (usec) : 500=0.74%, 750=10.70%, 1000=30.63% 00:10:02.935 lat (msec) : 2=36.16%, 50=21.40% 00:10:02.935 cpu : usr=0.04%, sys=0.34%, ctx=271, majf=0, minf=2 00:10:02.935 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.935 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.935 issued rwts: total=271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.935 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.935 00:10:02.935 Run 
status group 0 (all jobs): 00:10:02.935 READ: bw=9190KiB/s (9410kB/s), 412KiB/s-3874KiB/s (421kB/s-3967kB/s), io=28.3MiB (29.7MB), run=2624-3154msec 00:10:02.935 00:10:02.935 Disk stats (read/write): 00:10:02.935 nvme0n1: ios=1786/0, merge=0/0, ticks=2726/0, in_queue=2726, util=93.92% 00:10:02.935 nvme0n2: ios=2371/0, merge=0/0, ticks=3392/0, in_queue=3392, util=97.71% 00:10:02.935 nvme0n3: ios=2555/0, merge=0/0, ticks=2389/0, in_queue=2389, util=96.03% 00:10:02.935 nvme0n4: ios=269/0, merge=0/0, ticks=2553/0, in_queue=2553, util=96.46% 00:10:02.935 19:48:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:02.935 19:48:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:03.196 19:48:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:03.196 19:48:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:03.457 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:03.457 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:03.457 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:03.457 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:03.718 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:03.718 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3517505 00:10:03.718 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:03.718 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:03.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.718 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:03.718 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:03.718 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:03.718 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.718 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:03.718 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.718 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:03.718 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:03.718 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 
00:10:03.718 nvmf hotplug test: fio failed as expected 00:10:03.718 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:03.978 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:03.978 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:03.978 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:03.978 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:03.978 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:03.978 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:03.978 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:03.978 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:03.978 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:03.978 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:03.978 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:03.978 rmmod nvme_tcp 00:10:03.978 rmmod nvme_fabrics 00:10:03.978 rmmod nvme_keyring 00:10:03.978 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:03.979 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:03.979 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:03.979 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3513204 ']' 00:10:03.979 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3513204 00:10:03.979 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3513204 ']' 00:10:03.979 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3513204 00:10:03.979 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:03.979 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:03.979 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3513204 00:10:04.240 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.240 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.240 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3513204' 00:10:04.240 killing process with pid 3513204 00:10:04.240 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3513204 00:10:04.240 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3513204 00:10:04.240 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:04.240 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:04.240 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:04.240 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:04.240 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:04.240 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:04.240 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:04.240 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:04.240 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:04.240 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.240 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.240 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:06.792 00:10:06.792 real 0m29.611s 00:10:06.792 user 2m38.603s 00:10:06.792 sys 0m9.703s 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.792 ************************************ 00:10:06.792 END TEST nvmf_fio_target 00:10:06.792 ************************************ 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:06.792 ************************************ 00:10:06.792 START TEST nvmf_bdevio 00:10:06.792 ************************************ 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:06.792 * Looking for test storage... 
00:10:06.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:06.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.792 --rc genhtml_branch_coverage=1 00:10:06.792 --rc genhtml_function_coverage=1 00:10:06.792 --rc genhtml_legend=1 00:10:06.792 --rc geninfo_all_blocks=1 00:10:06.792 --rc geninfo_unexecuted_blocks=1 00:10:06.792 00:10:06.792 ' 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:06.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.792 --rc genhtml_branch_coverage=1 00:10:06.792 --rc genhtml_function_coverage=1 00:10:06.792 --rc genhtml_legend=1 00:10:06.792 --rc geninfo_all_blocks=1 00:10:06.792 --rc geninfo_unexecuted_blocks=1 00:10:06.792 00:10:06.792 ' 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:06.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.792 --rc genhtml_branch_coverage=1 00:10:06.792 --rc genhtml_function_coverage=1 00:10:06.792 --rc genhtml_legend=1 00:10:06.792 --rc geninfo_all_blocks=1 00:10:06.792 --rc geninfo_unexecuted_blocks=1 00:10:06.792 00:10:06.792 ' 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:06.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.792 --rc genhtml_branch_coverage=1 00:10:06.792 --rc genhtml_function_coverage=1 00:10:06.792 --rc genhtml_legend=1 00:10:06.792 --rc geninfo_all_blocks=1 00:10:06.792 --rc geninfo_unexecuted_blocks=1 00:10:06.792 00:10:06.792 ' 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:06.792 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:06.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:06.793 19:48:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:15.021 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:15.021 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:15.021 19:48:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:15.021 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:15.021 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:15.021 
19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:15.021 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:15.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:15.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.509 ms 00:10:15.021 00:10:15.021 --- 10.0.0.2 ping statistics --- 00:10:15.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.022 rtt min/avg/max/mdev = 0.509/0.509/0.509/0.000 ms 00:10:15.022 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:15.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:15.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:10:15.022 00:10:15.022 --- 10.0.0.1 ping statistics --- 00:10:15.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.022 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:10:15.022 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:15.022 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:15.022 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:15.022 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:15.022 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:15.022 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:15.022 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:15.022 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:15.022 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:15.022 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:15.022 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:15.022 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:15.022 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.022 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3523505 00:10:15.022 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3523505 00:10:15.022 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:15.022 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3523505 ']' 00:10:15.022 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.022 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.022 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.022 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.022 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.022 [2024-11-26 19:48:14.906937] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:10:15.022 [2024-11-26 19:48:14.907006] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.022 [2024-11-26 19:48:15.006259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:15.022 [2024-11-26 19:48:15.060386] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.022 [2024-11-26 19:48:15.060438] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.022 [2024-11-26 19:48:15.060446] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.022 [2024-11-26 19:48:15.060454] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.022 [2024-11-26 19:48:15.060460] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:15.022 [2024-11-26 19:48:15.062907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:15.022 [2024-11-26 19:48:15.063071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:15.022 [2024-11-26 19:48:15.063231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.022 [2024-11-26 19:48:15.063231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:15.022 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.022 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:15.022 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:15.022 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:15.022 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.022 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.022 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:15.022 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.022 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.022 [2024-11-26 19:48:15.784875] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:15.022 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.022 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:15.022 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.022 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.284 Malloc0 00:10:15.284 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.284 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:15.284 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.284 19:48:15 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.284 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.284 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:15.284 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.284 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.284 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.285 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:15.285 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.285 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.285 [2024-11-26 19:48:15.865976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:15.285 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.285 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:15.285 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:15.285 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:15.285 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:15.285 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:15.285 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:15.285 { 00:10:15.285 "params": { 00:10:15.285 "name": "Nvme$subsystem", 00:10:15.285 "trtype": "$TEST_TRANSPORT", 00:10:15.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:15.285 "adrfam": "ipv4", 00:10:15.285 "trsvcid": "$NVMF_PORT", 00:10:15.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:15.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:15.285 "hdgst": ${hdgst:-false}, 00:10:15.285 "ddgst": ${ddgst:-false} 00:10:15.285 }, 00:10:15.285 "method": "bdev_nvme_attach_controller" 00:10:15.285 } 00:10:15.285 EOF 00:10:15.285 )") 00:10:15.285 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:15.285 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:15.285 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:15.285 19:48:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:15.285 "params": { 00:10:15.285 "name": "Nvme1", 00:10:15.285 "trtype": "tcp", 00:10:15.285 "traddr": "10.0.0.2", 00:10:15.285 "adrfam": "ipv4", 00:10:15.285 "trsvcid": "4420", 00:10:15.285 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:15.285 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:15.285 "hdgst": false, 00:10:15.285 "ddgst": false 00:10:15.285 }, 00:10:15.285 "method": "bdev_nvme_attach_controller" 00:10:15.285 }' 00:10:15.285 [2024-11-26 19:48:15.924652] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:10:15.285 [2024-11-26 19:48:15.924721] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3523790 ] 00:10:15.285 [2024-11-26 19:48:16.019222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:15.285 [2024-11-26 19:48:16.075007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.285 [2024-11-26 19:48:16.075195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.285 [2024-11-26 19:48:16.075245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.546 I/O targets: 00:10:15.546 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:15.546 00:10:15.546 00:10:15.546 CUnit - A unit testing framework for C - Version 2.1-3 00:10:15.546 http://cunit.sourceforge.net/ 00:10:15.546 00:10:15.546 00:10:15.546 Suite: bdevio tests on: Nvme1n1 00:10:15.546 Test: blockdev write read block ...passed 00:10:15.806 Test: blockdev write zeroes read block ...passed 00:10:15.807 Test: blockdev write zeroes read no split ...passed 00:10:15.807 Test: blockdev write zeroes read split ...passed 00:10:15.807 Test: blockdev write zeroes read split partial ...passed 00:10:15.807 Test: blockdev reset ...[2024-11-26 19:48:16.463979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:15.807 [2024-11-26 19:48:16.464067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ca970 (9): Bad file descriptor 00:10:15.807 [2024-11-26 19:48:16.476297] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:15.807 passed 00:10:15.807 Test: blockdev write read 8 blocks ...passed 00:10:15.807 Test: blockdev write read size > 128k ...passed 00:10:15.807 Test: blockdev write read invalid size ...passed 00:10:15.807 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:15.807 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:15.807 Test: blockdev write read max offset ...passed 00:10:16.068 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:16.068 Test: blockdev writev readv 8 blocks ...passed 00:10:16.068 Test: blockdev writev readv 30 x 1block ...passed 00:10:16.068 Test: blockdev writev readv block ...passed 00:10:16.068 Test: blockdev writev readv size > 128k ...passed 00:10:16.068 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:16.068 Test: blockdev comparev and writev ...[2024-11-26 19:48:16.783611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.068 [2024-11-26 19:48:16.783659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:16.068 [2024-11-26 19:48:16.783676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.068 [2024-11-26 19:48:16.783686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:16.068 [2024-11-26 19:48:16.784228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.068 [2024-11-26 19:48:16.784241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:16.068 [2024-11-26 19:48:16.784255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.068 [2024-11-26 19:48:16.784263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:16.068 [2024-11-26 19:48:16.784810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.068 [2024-11-26 19:48:16.784823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:16.068 [2024-11-26 19:48:16.784837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.068 [2024-11-26 19:48:16.784845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:16.068 [2024-11-26 19:48:16.785377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.068 [2024-11-26 19:48:16.785389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:16.068 [2024-11-26 19:48:16.785403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.068 [2024-11-26 19:48:16.785411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:16.068 passed 00:10:16.068 Test: blockdev nvme passthru rw ...passed 00:10:16.068 Test: blockdev nvme passthru vendor specific ...[2024-11-26 19:48:16.869781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:16.068 [2024-11-26 19:48:16.869805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:16.068 [2024-11-26 19:48:16.870170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:16.068 [2024-11-26 19:48:16.870183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:16.068 [2024-11-26 19:48:16.870577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:16.068 [2024-11-26 19:48:16.870588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:16.068 [2024-11-26 19:48:16.870962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:16.068 [2024-11-26 19:48:16.870973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:16.068 passed 00:10:16.328 Test: blockdev nvme admin passthru ...passed 00:10:16.328 Test: blockdev copy ...passed 00:10:16.328 00:10:16.328 Run Summary: Type Total Ran Passed Failed Inactive 00:10:16.328 suites 1 1 n/a 0 0 00:10:16.328 tests 23 23 23 0 0 00:10:16.328 asserts 152 152 152 0 n/a 00:10:16.329 00:10:16.329 Elapsed time = 1.284 seconds 00:10:16.329 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:16.329 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.329 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.329 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.329 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:16.329 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:16.329 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:16.329 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:16.329 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:16.329 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:16.329 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:16.329 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:16.329 rmmod nvme_tcp 00:10:16.329 rmmod nvme_fabrics 00:10:16.329 rmmod nvme_keyring 00:10:16.329 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:16.329 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:16.329 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
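The teardown entries that follow trace the killprocess helper from autotest_common.sh: the trace shows it probing the PID with kill -0, resolving the command name via ps --no-headers -o comm= (reactor_3 here) and comparing it against sudo before signalling, then kill and wait. A minimal sketch reconstructed from those traced commands — the guards and the sudo handling in the real helper may differ, and the error redirection below is an assumption:

killprocess() {
    # $1 = PID recorded when the target was started (nvmfpid in this run)
    local pid=$1
    # probe first: if the process is already gone there is nothing to do
    kill -0 "$pid" 2>/dev/null || return 0
    if [ "$(uname)" = Linux ]; then
        # resolve the command name so a recycled PID is never signalled;
        # the real helper treats a sudo wrapper specially, here we just bail
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    # the target was launched by this shell, so wait both reaps it and
    # propagates its exit status to the caller
    wait "$pid"
}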
00:10:16.329 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3523505 ']' 00:10:16.329 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3523505 00:10:16.329 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3523505 ']' 00:10:16.329 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3523505 00:10:16.329 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:16.329 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:16.329 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3523505 00:10:16.590 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:16.590 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:16.590 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3523505' 00:10:16.590 killing process with pid 3523505 00:10:16.590 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3523505 00:10:16.590 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3523505 00:10:16.590 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:16.590 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:16.590 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:16.590 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:16.590 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:16.590 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:16.590 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:16.590 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:16.590 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:16.590 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.590 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.590 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.153 19:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:19.153 00:10:19.153 real 0m12.294s 00:10:19.153 user 0m13.423s 00:10:19.153 sys 0m6.353s 00:10:19.153 19:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.153 19:48:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:19.153 ************************************ 00:10:19.153 END TEST nvmf_bdevio 00:10:19.153 ************************************ 00:10:19.153 19:48:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:19.153 00:10:19.154 real 5m6.024s 00:10:19.154 user 11m53.643s 00:10:19.154 sys 1m53.877s 
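The real/user/sys triples above and the START/END banners around each suite come from the run_test wrapper, which times the sub-script between two banners. A rough sketch of that pattern as it appears in the trace — the actual wrapper in autotest_common.sh also manages xtrace and failure accounting, which this omits:

run_test() {
    # $1 = suite name; the remaining args are the script and its flags,
    # e.g. run_test nvmf_target_extra .../nvmf_target_extra.sh --transport=tcp
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    # time(1) emits the real/user/sys triple logged after each suite
    time "$@"
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}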
00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:19.154 ************************************ 00:10:19.154 END TEST nvmf_target_core 00:10:19.154 ************************************ 00:10:19.154 19:48:19 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:19.154 19:48:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:19.154 19:48:19 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.154 19:48:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:19.154 ************************************ 00:10:19.154 START TEST nvmf_target_extra 00:10:19.154 ************************************ 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:19.154 * Looking for test storage... 00:10:19.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:19.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.154 --rc genhtml_branch_coverage=1 00:10:19.154 --rc genhtml_function_coverage=1 00:10:19.154 --rc genhtml_legend=1 00:10:19.154 --rc geninfo_all_blocks=1 00:10:19.154 --rc geninfo_unexecuted_blocks=1 00:10:19.154 00:10:19.154 ' 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:19.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.154 --rc genhtml_branch_coverage=1 00:10:19.154 --rc genhtml_function_coverage=1 00:10:19.154 --rc genhtml_legend=1 00:10:19.154 --rc geninfo_all_blocks=1 00:10:19.154 --rc geninfo_unexecuted_blocks=1 00:10:19.154 00:10:19.154 ' 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:19.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.154 --rc genhtml_branch_coverage=1 00:10:19.154 --rc genhtml_function_coverage=1 00:10:19.154 --rc genhtml_legend=1 00:10:19.154 --rc geninfo_all_blocks=1 00:10:19.154 --rc geninfo_unexecuted_blocks=1 00:10:19.154 00:10:19.154 ' 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:19.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.154 --rc genhtml_branch_coverage=1 00:10:19.154 --rc genhtml_function_coverage=1 00:10:19.154 --rc genhtml_legend=1 00:10:19.154 --rc geninfo_all_blocks=1 00:10:19.154 --rc geninfo_unexecuted_blocks=1 00:10:19.154 00:10:19.154 ' 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.154 19:48:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.155 19:48:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.155 19:48:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.155 19:48:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.155 19:48:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:19.155 19:48:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.155 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:19.155 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:19.155 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:19.155 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.155 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.155 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.155 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:19.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:19.155 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:19.155 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:19.155 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:19.155 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:19.155 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:19.155 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:19.155 19:48:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:19.155 19:48:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:19.155 19:48:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.155 19:48:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:19.155 ************************************ 00:10:19.155 START TEST nvmf_example 00:10:19.155 ************************************ 00:10:19.155 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:19.155 * Looking for test storage... 
00:10:19.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:19.155 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:19.155 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:10:19.155 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:19.417 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:19.417 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.417 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.417 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.417 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.417 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.417 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.417 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.417 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.417 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.417 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.417 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.417 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:19.417 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:19.417 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.417 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:19.417 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:19.417 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:19.417 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.417 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:19.417 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.417 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:19.417 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:19.417 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.417 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:19.417 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.417 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.417 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.417 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:19.417 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.417 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:19.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.417 --rc genhtml_branch_coverage=1 00:10:19.418 --rc genhtml_function_coverage=1 00:10:19.418 --rc genhtml_legend=1 00:10:19.418 --rc geninfo_all_blocks=1 00:10:19.418 --rc geninfo_unexecuted_blocks=1 00:10:19.418 00:10:19.418 ' 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:19.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.418 --rc genhtml_branch_coverage=1 00:10:19.418 --rc genhtml_function_coverage=1 00:10:19.418 --rc genhtml_legend=1 00:10:19.418 --rc geninfo_all_blocks=1 00:10:19.418 --rc geninfo_unexecuted_blocks=1 00:10:19.418 00:10:19.418 ' 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:19.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.418 --rc genhtml_branch_coverage=1 00:10:19.418 --rc genhtml_function_coverage=1 00:10:19.418 --rc genhtml_legend=1 00:10:19.418 --rc geninfo_all_blocks=1 00:10:19.418 --rc geninfo_unexecuted_blocks=1 00:10:19.418 00:10:19.418 ' 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:19.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.418 --rc genhtml_branch_coverage=1 00:10:19.418 --rc genhtml_function_coverage=1 00:10:19.418 --rc genhtml_legend=1 00:10:19.418 --rc geninfo_all_blocks=1 00:10:19.418 --rc geninfo_unexecuted_blocks=1 00:10:19.418 00:10:19.418 ' 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:19.418 19:48:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:19.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:19.418 19:48:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:19.418 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:27.562 19:48:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:27.562 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:27.562 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:27.562 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:27.562 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.562 19:48:27 
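The scan traced above buckets each NIC by its PCI vendor:device pair (0x8086:0x159b is the E810 "ice" part found at 0000:4b:00.0/.1) and then globs sysfs to translate PCI addresses into interface names. A condensed sketch of both steps, assuming a pci_bus_cache map shaped like the one nvmf/common.sh builds:

# 1) classify by "vendor:device" key (the value is intentionally word-split):
declare -A pci_bus_cache=( ["0x8086:0x159b"]="0000:4b:00.0 0000:4b:00.1" )
intel=0x8086
e810=()
e810+=(${pci_bus_cache["$intel:0x159b"]})
# 2) map each port to its kernel net devices via sysfs:
for pci in "${e810[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"   # cvl_0_0 / cvl_0_1 on this host
done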
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:27.562 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:27.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:27.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms
00:10:27.563
00:10:27.563 --- 10.0.0.2 ping statistics ---
00:10:27.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:27.563 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms
00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:27.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:27.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms
00:10:27.563
00:10:27.563 --- 10.0.0.1 ping statistics ---
00:10:27.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:27.563 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms
00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0
00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3528500
00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3528500
00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3528500 ']'
00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:27.563 19:48:27
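The two pings above confirm the split topology that the nvmf_tcp_init steps earlier in the trace set up: one E810 port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace to act as the target, the peer port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, and an iptables rule opens the NVMe/TCP port. A condensed replay of those commands as they appear above:

ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic
ping -c 1 10.0.0.2                                             # initiator -> target sanity check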
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:27.563 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.824 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:27.824 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:27.824 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:27.824 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:27.824 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.824 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:27.824 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.824 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.824 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.824 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:27.824 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.824 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.824 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.824 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:27.824 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:27.824 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.824 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.824 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.824 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:27.824 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:27.824 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.824 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.086 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.086 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.086 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
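The rpc_cmd calls above provision the target end to end: create the TCP transport, back it with a 64 MiB malloc bdev, create subsystem cnode1, attach the bdev as a namespace, and open a listener on 10.0.0.2:4420. Outside the harness the same sequence can be driven with scripts/rpc.py against the app's default /var/tmp/spdk.sock socket; a sketch, not a verbatim excerpt of the test:

rpc=scripts/rpc.py   # run from the SPDK repo root while the target app is up
$rpc nvmf_create_transport -t tcp -o -u 8192                 # -u: 8 KiB io_unit_size
$rpc bdev_malloc_create 64 512                               # 64 MiB / 512 B blocks -> Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420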
xtrace_disable
00:10:28.086 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:28.086 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:28.086 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:10:28.086 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:10:38.089 Initializing NVMe Controllers
00:10:38.089 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:38.089 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:38.089 Initialization complete. Launching workers.
00:10:38.089 ========================================================
00:10:38.089 Latency(us)
00:10:38.089 Device Information : IOPS MiB/s Average min max
00:10:38.089 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18942.26 73.99 3378.35 636.84 15501.87
00:10:38.089 ========================================================
00:10:38.089 Total : 18942.26 73.99 3378.35 636.84 15501.87
00:10:38.089
00:10:38.089 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:10:38.089 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:10:38.089 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:38.089 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:10:38.089 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:38.089 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:10:38.089 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:38.089 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:38.089 rmmod nvme_tcp
00:10:38.089 rmmod nvme_fabrics
00:10:38.089 rmmod nvme_keyring
00:10:38.089 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:38.089 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:10:38.089 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:10:38.089 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3528500 ']'
00:10:38.089 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3528500
00:10:38.089 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3528500 ']'
00:10:38.089 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3528500
00:10:38.089 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:10:38.089 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:38.350 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3528500
00:10:38.350 19:48:38
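The perf summary above is internally consistent, which makes for a quick sanity check: at a 4096 B I/O size, 18942.26 IOPS is 18942.26 x 4096 / 2^20 ≈ 73.99 MiB/s (the MiB/s column), and by Little's law a queue depth of 64 at that rate implies 64 / 18942.26 s ≈ 3379 us of average latency, in line with the reported 3378.35 us. The same arithmetic as a one-liner:

# cross-check the table: derive throughput and average latency from IOPS, qd, io size
awk 'BEGIN {
    iops = 18942.26; iosize = 4096; qd = 64
    printf "MiB/s  : %.2f\n", iops * iosize / 2^20    # -> 73.99
    printf "avg us : %.2f\n", qd / iops * 1e6         # -> 3378.69
}'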
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:38.350 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:38.350 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3528500' 00:10:38.350 killing process with pid 3528500 00:10:38.350 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3528500 00:10:38.350 19:48:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3528500 00:10:38.350 nvmf threads initialize successfully 00:10:38.350 bdev subsystem init successfully 00:10:38.350 created a nvmf target service 00:10:38.350 create targets's poll groups done 00:10:38.350 all subsystems of target started 00:10:38.350 nvmf target is running 00:10:38.350 all subsystems of target stopped 00:10:38.350 destroy targets's poll groups done 00:10:38.350 destroyed the nvmf target service 00:10:38.350 bdev subsystem finish successfully 00:10:38.350 nvmf threads destroy successfully 00:10:38.350 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:38.350 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:38.350 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:38.350 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:38.350 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:38.350 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:38.350 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:38.350 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:38.350 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:38.350 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.350 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.350 19:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:40.900 00:10:40.900 real 0m21.392s 00:10:40.900 user 0m46.381s 00:10:40.900 sys 0m7.041s 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:40.900 ************************************ 00:10:40.900 END TEST nvmf_example 00:10:40.900 ************************************ 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:40.900 ************************************ 00:10:40.900 START TEST nvmf_filesystem 00:10:40.900 ************************************ 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:40.900 * Looking for test storage... 00:10:40.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:40.900 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:40.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.900 --rc genhtml_branch_coverage=1 00:10:40.900 --rc genhtml_function_coverage=1 00:10:40.901 --rc genhtml_legend=1 00:10:40.901 --rc geninfo_all_blocks=1 00:10:40.901 --rc geninfo_unexecuted_blocks=1 00:10:40.901 00:10:40.901 ' 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:40.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.901 --rc genhtml_branch_coverage=1 00:10:40.901 --rc genhtml_function_coverage=1 00:10:40.901 --rc genhtml_legend=1 00:10:40.901 --rc geninfo_all_blocks=1 00:10:40.901 --rc geninfo_unexecuted_blocks=1 00:10:40.901 00:10:40.901 ' 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:40.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.901 --rc genhtml_branch_coverage=1 00:10:40.901 --rc genhtml_function_coverage=1 00:10:40.901 --rc genhtml_legend=1 00:10:40.901 --rc geninfo_all_blocks=1 00:10:40.901 --rc geninfo_unexecuted_blocks=1 00:10:40.901 00:10:40.901 ' 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:40.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.901 --rc genhtml_branch_coverage=1 00:10:40.901 --rc genhtml_function_coverage=1 00:10:40.901 --rc genhtml_legend=1 00:10:40.901 --rc geninfo_all_blocks=1 00:10:40.901 --rc geninfo_unexecuted_blocks=1 00:10:40.901 00:10:40.901 ' 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:40.901 19:48:41 
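The lt/cmp_versions trace above is a pure-bash, field-wise version comparison: split both strings on the separators (the script uses IFS=.-:), compare the fields numerically left to right, and treat missing fields as 0. A simplified standalone sketch of the same idea:

# ver_lt A B: succeed if version A sorts strictly before version B
ver_lt() {
    local IFS=.- i a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}
ver_lt 1.15 2 && echo "1.15 < 2"   # matches the lt 1.15 2 check above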
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:40.901 
19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:40.901 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:40.902 #define SPDK_CONFIG_H 00:10:40.902 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:40.902 #define SPDK_CONFIG_APPS 1 00:10:40.902 #define SPDK_CONFIG_ARCH native 00:10:40.902 #undef SPDK_CONFIG_ASAN 00:10:40.902 #undef SPDK_CONFIG_AVAHI 00:10:40.902 #undef SPDK_CONFIG_CET 00:10:40.902 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:40.902 #define SPDK_CONFIG_COVERAGE 1 00:10:40.902 #define SPDK_CONFIG_CROSS_PREFIX 00:10:40.902 #undef SPDK_CONFIG_CRYPTO 00:10:40.902 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:40.902 #undef SPDK_CONFIG_CUSTOMOCF 00:10:40.902 #undef SPDK_CONFIG_DAOS 00:10:40.902 #define SPDK_CONFIG_DAOS_DIR 00:10:40.902 #define SPDK_CONFIG_DEBUG 1 00:10:40.902 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:40.902 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:40.902 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:40.902 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:40.902 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:40.902 #undef SPDK_CONFIG_DPDK_UADK 00:10:40.902 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:40.902 #define SPDK_CONFIG_EXAMPLES 1 00:10:40.902 #undef SPDK_CONFIG_FC 00:10:40.902 #define SPDK_CONFIG_FC_PATH 00:10:40.902 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:40.902 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:40.902 #define SPDK_CONFIG_FSDEV 1 00:10:40.902 #undef SPDK_CONFIG_FUSE 00:10:40.902 #undef SPDK_CONFIG_FUZZER 00:10:40.902 #define SPDK_CONFIG_FUZZER_LIB 00:10:40.902 #undef SPDK_CONFIG_GOLANG 00:10:40.902 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:40.902 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:40.902 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:40.902 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:40.902 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:40.902 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:40.902 #undef SPDK_CONFIG_HAVE_LZ4 00:10:40.902 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:40.902 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:40.902 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:40.902 #define SPDK_CONFIG_IDXD 1 00:10:40.902 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:40.902 #undef SPDK_CONFIG_IPSEC_MB 00:10:40.902 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:40.902 #define SPDK_CONFIG_ISAL 1 00:10:40.902 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:40.902 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:40.902 #define SPDK_CONFIG_LIBDIR 00:10:40.902 #undef SPDK_CONFIG_LTO 00:10:40.902 #define SPDK_CONFIG_MAX_LCORES 128 00:10:40.902 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:40.902 #define SPDK_CONFIG_NVME_CUSE 1 00:10:40.902 #undef SPDK_CONFIG_OCF 00:10:40.902 #define SPDK_CONFIG_OCF_PATH 00:10:40.902 #define SPDK_CONFIG_OPENSSL_PATH 00:10:40.902 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:40.902 #define SPDK_CONFIG_PGO_DIR 00:10:40.902 #undef SPDK_CONFIG_PGO_USE 00:10:40.902 #define SPDK_CONFIG_PREFIX /usr/local 00:10:40.902 #undef SPDK_CONFIG_RAID5F 00:10:40.902 #undef SPDK_CONFIG_RBD 00:10:40.902 #define SPDK_CONFIG_RDMA 1 00:10:40.902 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:40.902 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:40.902 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:40.902 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:40.902 #define SPDK_CONFIG_SHARED 1 00:10:40.902 #undef SPDK_CONFIG_SMA 00:10:40.902 #define SPDK_CONFIG_TESTS 1 00:10:40.902 #undef SPDK_CONFIG_TSAN 
00:10:40.902 #define SPDK_CONFIG_UBLK 1 00:10:40.902 #define SPDK_CONFIG_UBSAN 1 00:10:40.902 #undef SPDK_CONFIG_UNIT_TESTS 00:10:40.902 #undef SPDK_CONFIG_URING 00:10:40.902 #define SPDK_CONFIG_URING_PATH 00:10:40.902 #undef SPDK_CONFIG_URING_ZNS 00:10:40.902 #undef SPDK_CONFIG_USDT 00:10:40.902 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:40.902 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:40.902 #define SPDK_CONFIG_VFIO_USER 1 00:10:40.902 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:40.902 #define SPDK_CONFIG_VHOST 1 00:10:40.902 #define SPDK_CONFIG_VIRTIO 1 00:10:40.902 #undef SPDK_CONFIG_VTUNE 00:10:40.902 #define SPDK_CONFIG_VTUNE_DIR 00:10:40.902 #define SPDK_CONFIG_WERROR 1 00:10:40.902 #define SPDK_CONFIG_WPDK_DIR 00:10:40.902 #undef SPDK_CONFIG_XNVME 00:10:40.902 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:40.902 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:40.903 19:48:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
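[editor's note] The run of numbered `: 0` / `: 1` / `export SPDK_*` pairs that starts above and continues through the next several entries is autotest_common.sh publishing its test-flag defaults: xtrace prints the post-expansion value, so `: 1` before `export SPDK_RUN_FUNCTIONAL_TEST` is the `${VAR:=default}` idiom with the value already set by autorun-spdk.conf. A minimal sketch of that idiom (flag names taken from this trace, default values illustrative):

  # Default-then-export idiom traced above (sketch).
  # ": ${VAR:=dflt}" assigns only when VAR is unset or empty, so values
  # sourced earlier from autorun-spdk.conf win over these defaults.
  : "${SPDK_RUN_FUNCTIONAL_TEST:=0}"
  : "${SPDK_TEST_NVMF:=0}"
  : "${SPDK_TEST_NVME_CLI:=0}"
  export SPDK_RUN_FUNCTIONAL_TEST SPDK_TEST_NVMF SPDK_TEST_NVME_CLI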
00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:40.903 19:48:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:40.903 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
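[editor's note] The entries above wire up the sanitizer runtime: a fresh LSAN suppression file is written with the known libfuse3 leak, and the LSAN/UBSAN/ASAN option strings are exported. Condensed into a standalone sketch, with the paths and option values copied from this trace:

  # Sanitizer wiring as recorded above (sketch).
  sup=/var/tmp/asan_suppression_file
  rm -rf "$sup"
  echo 'leak:libfuse3.so' >> "$sup"    # suppress the known fuse3 leak
  export LSAN_OPTIONS="suppressions=$sup"
  export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
  export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'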
00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:40.904 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3531289 ]] 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3531289 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 
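[editor's note] `set_test_storage 2147483648` below asks for roughly 2 GiB of scratch space: it parses `df -T` into associative arrays keyed by mount point, then checks candidate directories until one has enough room. A simplified sketch of that parsing loop (the real function also tracks filesystem types and tmpfs/ramfs special cases; the candidate directories here are illustrative):

  # Sketch of the df parsing that follows (simplified from set_test_storage).
  requested_size=2147483648                 # ~2 GiB, as requested below
  declare -A avails
  while read -r source fs size used avail _ mount; do
      avails["$mount"]=$((avail * 1024))    # df -T reports 1K blocks
  done < <(df -T | grep -v Filesystem)
  for target_dir in "$PWD" /tmp; do         # candidates illustrative
      mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
      if (( ${avails[$mount]:-0} >= requested_size )); then
          echo "using $target_dir"
          break
      fi
  done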
00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.ZaPLcx 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ZaPLcx/tests/target /tmp/spdk.ZaPLcx 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:40.905 19:48:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=118274068480 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356509184 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11082440704 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64666886144 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678252544 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847934976 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871302656 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23367680 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:40.905 19:48:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677244928 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678256640 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1011712 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935634944 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935647232 00:10:40.905 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:40.906 * Looking for test storage... 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=118274068480 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=13297033216 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:40.906 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:41.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.168 --rc genhtml_branch_coverage=1 00:10:41.168 --rc genhtml_function_coverage=1 00:10:41.168 --rc genhtml_legend=1 00:10:41.168 --rc geninfo_all_blocks=1 00:10:41.168 --rc geninfo_unexecuted_blocks=1 00:10:41.168 00:10:41.168 ' 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:41.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.168 --rc genhtml_branch_coverage=1 00:10:41.168 --rc genhtml_function_coverage=1 00:10:41.168 --rc genhtml_legend=1 00:10:41.168 --rc geninfo_all_blocks=1 00:10:41.168 --rc geninfo_unexecuted_blocks=1 00:10:41.168 00:10:41.168 ' 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:41.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.168 --rc genhtml_branch_coverage=1 00:10:41.168 --rc genhtml_function_coverage=1 00:10:41.168 --rc genhtml_legend=1 00:10:41.168 --rc geninfo_all_blocks=1 00:10:41.168 --rc geninfo_unexecuted_blocks=1 00:10:41.168 00:10:41.168 ' 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:41.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.168 --rc genhtml_branch_coverage=1 00:10:41.168 --rc genhtml_function_coverage=1 00:10:41.168 --rc genhtml_legend=1 00:10:41.168 --rc geninfo_all_blocks=1 00:10:41.168 --rc geninfo_unexecuted_blocks=1 00:10:41.168 00:10:41.168 ' 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
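[editor's note] The long scripts/common.sh walk above is `lt 1.15 2` deciding whether the installed lcov predates 2.x; it does, so the pre-2.0 `--rc lcov_*` option spellings are kept in LCOV_OPTS. The underlying `cmp_versions` splits each version string on `.-:` and compares field by field; a compact sketch of the less-than case, simplified from the traced logic:

  # Version comparison sketch (simplified from the cmp_versions walk above).
  lt() {  # lt 1.15 2 -> success (return 0) when $1 < $2
      local -a v1 v2; local i
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1                              # equal is not less-than
  }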
-- nvmf/common.sh@7 -- # uname -s 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.168 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:41.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:41.169 19:48:41 
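[editor's note] The "[: : integer expression expected" message above is the one genuine failure in this stretch: nvmf/common.sh line 33 runs `[ '' -eq 1 ]` because the flag it tests expands to an empty string on this runner (plausibly SPDK_TEST_NVME_INTERRUPT, which autotest_common.sh exports earlier in this trace, though the trace does not name it). `test`'s `-eq` needs integers on both sides, so an empty expansion aborts the comparison; the usual guard is a default expansion. A hedged sketch of the fix (variable and option names inferred, not confirmed):

  # Hedged fix for the empty -eq comparison logged above.
  # "${VAR:-0}" substitutes 0 when VAR is unset or empty, keeping -eq valid.
  if [ "${SPDK_TEST_NVME_INTERRUPT:-0}" -eq 1 ]; then
      NVMF_APP+=(--interrupt-mode)          # option name illustrative
  fi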
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:41.169 19:48:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:49.320 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:49.320 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:49.320 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:49.320 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:49.320 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:49.320 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:49.320 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:49.320 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:49.320 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:49.321 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:49.321 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:49.321 19:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:49.321 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.321 19:48:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:49.321 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:49.321 19:48:49 
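The device walk above resolves each whitelisted PCI function to its kernel net devices through sysfs; condensed, the lookup that produced the two "Found net devices under ..." lines is equivalent to this sketch:

for pci in 0000:4b:00.0 0000:4b:00.1; do           # the two e810 ports found above
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$path" ] || continue                 # skip functions with no bound netdev
        echo "Found net devices under $pci: ${path##*/}"
    done
done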
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:49.321 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:49.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:49.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:10:49.321 00:10:49.321 --- 10.0.0.2 ping statistics --- 00:10:49.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.322 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:10:49.322 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:49.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:49.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:10:49.322 00:10:49.322 --- 10.0.0.1 ping statistics --- 00:10:49.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.322 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:10:49.322 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:49.322 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:49.322 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:49.322 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:49.322 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:49.322 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:49.322 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:49.322 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:49.322 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:49.322 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:49.322 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:49.322 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.322 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:49.322 ************************************ 00:10:49.322 START TEST nvmf_filesystem_no_in_capsule 00:10:49.322 ************************************ 00:10:49.322 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:49.322 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:49.322 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:49.322 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:49.322 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:49.322 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.322 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3534940 00:10:49.322 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3534940 00:10:49.322 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:49.322 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3534940 ']' 00:10:49.322 
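Taken together, the nvmf_tcp_init commands above (common.sh@250 through @291) build a two-namespace topology: the first E810 port, cvl_0_0, becomes the target interface inside the cvl_0_0_ns_spdk namespace at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Stripped of xtrace tags, the sequence is:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP, test ns
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

The two one-packet pings are the sanity check that traffic crosses the physical link in both directions before any NVMe/TCP work starts.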
19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.322 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:49.322 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.322 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:49.322 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.322 [2024-11-26 19:48:49.451547] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:10:49.322 [2024-11-26 19:48:49.451611] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.322 [2024-11-26 19:48:49.552506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:49.322 [2024-11-26 19:48:49.606369] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.322 [2024-11-26 19:48:49.606423] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:49.322 [2024-11-26 19:48:49.606432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.322 [2024-11-26 19:48:49.606440] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.322 [2024-11-26 19:48:49.606447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
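waitforlisten blocks until the freshly started nvmf_tgt (pid 3534940) answers on /var/tmp/spdk.sock. A hedged stand-in for that wait, polling the RPC socket with a trivial call (the real helper's internals may differ):

# sketch only; rpc.py path is relative to the spdk checkout
for _ in $(seq 1 100); do
    scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
    sleep 0.1
done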
00:10:49.322 [2024-11-26 19:48:49.608795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.322 [2024-11-26 19:48:49.608955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.322 [2024-11-26 19:48:49.609474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:49.322 [2024-11-26 19:48:49.609565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.583 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:49.583 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:49.583 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:49.583 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:49.583 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.583 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.584 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:49.584 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:49.584 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.584 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.584 [2024-11-26 19:48:50.331606] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.584 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.584 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:49.584 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.584 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.846 Malloc1 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.846 19:48:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.846 [2024-11-26 19:48:50.491453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:49.846 { 00:10:49.846 "name": "Malloc1", 00:10:49.846 "aliases": [ 00:10:49.846 "1bf21e0b-46ff-4864-b0f6-f9ac855dce61" 00:10:49.846 ], 00:10:49.846 "product_name": "Malloc disk", 00:10:49.846 "block_size": 512, 00:10:49.846 "num_blocks": 1048576, 00:10:49.846 "uuid": "1bf21e0b-46ff-4864-b0f6-f9ac855dce61", 00:10:49.846 "assigned_rate_limits": { 00:10:49.846 "rw_ios_per_sec": 0, 00:10:49.846 "rw_mbytes_per_sec": 0, 00:10:49.846 "r_mbytes_per_sec": 0, 00:10:49.846 "w_mbytes_per_sec": 0 00:10:49.846 }, 00:10:49.846 "claimed": true, 00:10:49.846 "claim_type": "exclusive_write", 00:10:49.846 "zoned": false, 00:10:49.846 "supported_io_types": { 00:10:49.846 "read": 
true, 00:10:49.846 "write": true, 00:10:49.846 "unmap": true, 00:10:49.846 "flush": true, 00:10:49.846 "reset": true, 00:10:49.846 "nvme_admin": false, 00:10:49.846 "nvme_io": false, 00:10:49.846 "nvme_io_md": false, 00:10:49.846 "write_zeroes": true, 00:10:49.846 "zcopy": true, 00:10:49.846 "get_zone_info": false, 00:10:49.846 "zone_management": false, 00:10:49.846 "zone_append": false, 00:10:49.846 "compare": false, 00:10:49.846 "compare_and_write": false, 00:10:49.846 "abort": true, 00:10:49.846 "seek_hole": false, 00:10:49.846 "seek_data": false, 00:10:49.846 "copy": true, 00:10:49.846 "nvme_iov_md": false 00:10:49.846 }, 00:10:49.846 "memory_domains": [ 00:10:49.846 { 00:10:49.846 "dma_device_id": "system", 00:10:49.846 "dma_device_type": 1 00:10:49.846 }, 00:10:49.846 { 00:10:49.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.846 "dma_device_type": 2 00:10:49.846 } 00:10:49.846 ], 00:10:49.846 "driver_specific": {} 00:10:49.846 } 00:10:49.846 ]' 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:49.846 19:48:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:51.761 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:51.761 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:51.761 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:51.761 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:51.761 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:53.675 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:53.675 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:53.675 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:53.675 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:53.675 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:53.675 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:53.675 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:53.675 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:53.675 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:53.675 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:53.675 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:53.675 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:53.675 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:53.675 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:53.675 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:53.675 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:53.675 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:53.675 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:54.246 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:55.188 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:55.188 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:55.188 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:55.188 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.188 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.188 ************************************ 00:10:55.188 START TEST filesystem_ext4 00:10:55.188 ************************************ 00:10:55.188 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
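Before the per-filesystem subtests start, filesystem.sh@63 through @69 above locate the connected namespace by its serial, confirm its size matches the 512 MiB malloc bdev, and lay down a single GPT partition. Roughly (the sector-to-bytes arithmetic is assumed from setup/common.sh's sec_size_to_bytes, which echoes the same 536870912 figure):

nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
nvme_size=$(( $(cat "/sys/block/$nvme_name/size") * 512 ))   # 512-byte sectors -> bytes (assumed)
parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe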
00:10:55.188 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:55.188 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:55.188 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:55.188 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:55.188 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:55.188 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:55.188 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:55.188 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:55.188 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:55.188 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:55.188 mke2fs 1.47.0 (5-Feb-2023) 00:10:55.188 Discarding device blocks: 0/522240 done 00:10:55.448 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:55.448 Filesystem UUID: bc4d35f2-078f-4f35-8260-0128b377677d 00:10:55.448 Superblock backups stored on blocks: 00:10:55.448 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:55.448 00:10:55.448 Allocating group tables: 0/64 done 00:10:55.448 Writing inode tables: 0/64 done 00:10:58.746 Creating journal (8192 blocks): done 00:11:00.548 Writing superblocks and filesystem accounting information: 0/6428/64 done 00:11:00.548 00:11:00.548 19:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:00.548 19:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:07.129 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:07.129 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:07.129 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:07.129 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:07.129 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:07.129 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:07.129 
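That completes one full smoke cycle for ext4; the btrfs and xfs subtests below repeat the identical pattern, which reduces to:

mkfs.ext4 -F /dev/nvme0n1p1            # fstype/force flag vary per subtest
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa && sync
rm /mnt/device/aaa && sync
umount /mnt/device
kill -0 "$nvmfpid"                     # target process must still be alive

The kill -0 against the nvmfpid is the actual pass condition: the target has to survive the mounted-filesystem I/O.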
19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3534940 00:11:07.129 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:07.129 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:07.129 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:07.129 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:07.129 00:11:07.129 real 0m11.406s 00:11:07.129 user 0m0.029s 00:11:07.129 sys 0m0.076s 00:11:07.129 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.129 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:07.129 ************************************ 00:11:07.129 END TEST filesystem_ext4 00:11:07.129 ************************************ 00:11:07.129 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:07.129 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:07.129 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.129 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.129 ************************************ 00:11:07.129 START TEST filesystem_btrfs 00:11:07.129 ************************************ 00:11:07.129 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:07.129 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:07.129 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:07.129 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:07.129 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:07.129 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:07.129 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:07.129 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:07.129 19:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:07.129 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:07.129 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:07.129 btrfs-progs v6.8.1 00:11:07.129 See https://btrfs.readthedocs.io for more information. 00:11:07.129 00:11:07.129 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:07.129 NOTE: several default settings have changed in version 5.15, please make sure 00:11:07.129 this does not affect your deployments: 00:11:07.129 - DUP for metadata (-m dup) 00:11:07.129 - enabled no-holes (-O no-holes) 00:11:07.129 - enabled free-space-tree (-R free-space-tree) 00:11:07.129 00:11:07.129 Label: (null) 00:11:07.129 UUID: e40718a3-94e9-4012-9bc6-7a7d99391e27 00:11:07.129 Node size: 16384 00:11:07.129 Sector size: 4096 (CPU page size: 4096) 00:11:07.129 Filesystem size: 510.00MiB 00:11:07.129 Block group profiles: 00:11:07.129 Data: single 8.00MiB 00:11:07.129 Metadata: DUP 32.00MiB 00:11:07.129 System: DUP 8.00MiB 00:11:07.129 SSD detected: yes 00:11:07.129 Zoned device: no 00:11:07.129 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:07.129 Checksum: crc32c 00:11:07.129 Number of devices: 1 00:11:07.129 Devices: 00:11:07.129 ID SIZE PATH 00:11:07.129 1 510.00MiB /dev/nvme0n1p1 00:11:07.129 00:11:07.129 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:07.129 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:08.071 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:08.071 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:08.071 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:08.071 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:08.071 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:08.071 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:08.071 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3534940 00:11:08.071 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:08.071 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:08.364 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:08.364 
19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:08.364 00:11:08.364 real 0m1.468s 00:11:08.364 user 0m0.030s 00:11:08.364 sys 0m0.123s 00:11:08.364 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.364 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:08.364 ************************************ 00:11:08.364 END TEST filesystem_btrfs 00:11:08.364 ************************************ 00:11:08.364 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:08.364 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:08.364 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.364 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.364 ************************************ 00:11:08.364 START TEST filesystem_xfs 00:11:08.364 ************************************ 00:11:08.364 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:08.364 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:08.364 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:08.364 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:08.364 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:08.364 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:08.364 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:08.364 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:08.364 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:08.364 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:08.364 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:08.364 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:08.365 = sectsz=512 attr=2, projid32bit=1 00:11:08.365 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:08.365 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:08.365 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:08.365 = sunit=0 swidth=0 blks 00:11:08.365 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:08.365 log =internal log bsize=4096 blocks=16384, version=2 00:11:08.365 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:08.365 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:09.337 Discarding blocks...Done. 00:11:09.337 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:09.337 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:11.251 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:11.251 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:11.251 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:11.251 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:11.251 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:11.251 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:11.251 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3534940 00:11:11.251 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:11.251 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:11.251 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:11.251 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:11.251 00:11:11.251 real 0m2.965s 00:11:11.251 user 0m0.025s 00:11:11.251 sys 0m0.077s 00:11:11.251 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.251 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:11.251 ************************************ 00:11:11.251 END TEST filesystem_xfs 00:11:11.251 ************************************ 00:11:11.251 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:11.512 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:11.512 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:11.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.774 19:49:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:11.774 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:11.774 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:11.774 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.774 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:11.774 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.774 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:11.774 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:11.774 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.774 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.774 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.774 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:11.774 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3534940 00:11:11.774 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3534940 ']' 00:11:11.774 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3534940 00:11:11.774 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:11.774 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.774 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3534940 00:11:11.774 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:11.774 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:11.774 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3534940' 00:11:11.774 killing process with pid 3534940 00:11:11.774 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3534940 00:11:11.774 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 3534940 00:11:12.034 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:12.034 00:11:12.034 real 0m23.278s 00:11:12.034 user 1m31.971s 00:11:12.034 sys 0m1.599s 00:11:12.034 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.034 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.034 ************************************ 00:11:12.034 END TEST nvmf_filesystem_no_in_capsule 00:11:12.034 ************************************ 00:11:12.034 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:12.034 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:12.034 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.034 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:12.034 ************************************ 00:11:12.034 START TEST nvmf_filesystem_in_capsule 00:11:12.034 ************************************ 00:11:12.034 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:12.034 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:12.034 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:12.034 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:12.034 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:12.034 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.034 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3539879 00:11:12.034 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3539879 00:11:12.034 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:12.034 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3539879 ']' 00:11:12.034 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.034 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:12.034 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:12.034 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:12.034 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.034 [2024-11-26 19:49:12.807902] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:11:12.034 [2024-11-26 19:49:12.807949] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.294 [2024-11-26 19:49:12.898468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:12.294 [2024-11-26 19:49:12.929281] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:12.294 [2024-11-26 19:49:12.929313] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:12.294 [2024-11-26 19:49:12.929319] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:12.294 [2024-11-26 19:49:12.929324] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:12.294 [2024-11-26 19:49:12.929329] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:12.294 [2024-11-26 19:49:12.930591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.294 [2024-11-26 19:49:12.930716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:12.294 [2024-11-26 19:49:12.930864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.294 [2024-11-26 19:49:12.930867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:12.865 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.865 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:12.865 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:12.865 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:12.865 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.865 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.865 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:12.865 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:12.865 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.865 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.865 [2024-11-26 19:49:13.653315] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:12.865 19:49:13 
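This second test body differs from nvmf_filesystem_no_in_capsule only in the in-capsule data size fed to the transport (filesystem.sh@47 sets in_capsule=4096, which @52 passes through as -c):

rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0      # no_in_capsule run above
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # this in_capsule run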
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.865 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:12.865 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.865 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.126 Malloc1 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.126 [2024-11-26 19:49:13.802049] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:13.126 19:49:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:13.126 { 00:11:13.126 "name": "Malloc1", 00:11:13.126 "aliases": [ 00:11:13.126 "4243ca19-309b-464f-a3a5-76dafb4ac1f2" 00:11:13.126 ], 00:11:13.126 "product_name": "Malloc disk", 00:11:13.126 "block_size": 512, 00:11:13.126 "num_blocks": 1048576, 00:11:13.126 "uuid": "4243ca19-309b-464f-a3a5-76dafb4ac1f2", 00:11:13.126 "assigned_rate_limits": { 00:11:13.126 "rw_ios_per_sec": 0, 00:11:13.126 "rw_mbytes_per_sec": 0, 00:11:13.126 "r_mbytes_per_sec": 0, 00:11:13.126 "w_mbytes_per_sec": 0 00:11:13.126 }, 00:11:13.126 "claimed": true, 00:11:13.126 "claim_type": "exclusive_write", 00:11:13.126 "zoned": false, 00:11:13.126 "supported_io_types": { 00:11:13.126 "read": true, 00:11:13.126 "write": true, 00:11:13.126 "unmap": true, 00:11:13.126 "flush": true, 00:11:13.126 "reset": true, 00:11:13.126 "nvme_admin": false, 00:11:13.126 "nvme_io": false, 00:11:13.126 "nvme_io_md": false, 00:11:13.126 "write_zeroes": true, 00:11:13.126 "zcopy": true, 00:11:13.126 "get_zone_info": false, 00:11:13.126 "zone_management": false, 00:11:13.126 "zone_append": false, 00:11:13.126 "compare": false, 00:11:13.126 "compare_and_write": false, 00:11:13.126 "abort": true, 00:11:13.126 "seek_hole": false, 00:11:13.126 "seek_data": false, 00:11:13.126 "copy": true, 00:11:13.126 "nvme_iov_md": false 00:11:13.126 }, 00:11:13.126 "memory_domains": [ 00:11:13.126 { 00:11:13.126 "dma_device_id": "system", 00:11:13.126 "dma_device_type": 1 00:11:13.126 }, 00:11:13.126 { 00:11:13.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.126 "dma_device_type": 2 00:11:13.126 } 00:11:13.126 ], 00:11:13.126 "driver_specific": {} 00:11:13.126 } 00:11:13.126 ]' 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:13.126 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:15.045 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:15.045 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:15.045 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:15.045 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:15.045 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:16.958 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:16.958 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:16.958 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:16.958 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:16.958 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:16.958 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:16.958 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:16.958 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:16.958 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:16.958 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:16.958 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:16.958 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:16.958 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:16.958 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:16.959 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:16.959 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:16.959 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:17.219 19:49:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:17.803 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:18.747 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:18.747 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:18.747 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:18.747 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.747 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.747 ************************************ 00:11:18.747 START TEST filesystem_in_capsule_ext4 00:11:18.747 ************************************ 00:11:18.747 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:18.747 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:18.747 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:18.747 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:18.747 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:18.747 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:18.747 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:18.747 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:18.747 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:18.747 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:18.747 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:18.747 mke2fs 1.47.0 (5-Feb-2023) 00:11:18.747 Discarding device blocks: 0/522240 done 00:11:18.747 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:18.747 Filesystem UUID: ff481a6b-09bd-44a4-a370-281cfd3921d2 00:11:18.747 Superblock backups stored on blocks: 00:11:18.747 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:18.747 00:11:18.747 Allocating group tables: 0/64 done 00:11:18.747 Writing inode tables: 
0/64 done 00:11:20.134 Creating journal (8192 blocks): done 00:11:22.351 Writing superblocks and filesystem accounting information: 0/64 done 00:11:22.351 00:11:22.351 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:22.351 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:28.937 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:28.937 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:28.937 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:28.937 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:28.937 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:28.937 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:28.937 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3539879 00:11:28.937 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:28.937 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:28.937 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:28.937 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:28.937 00:11:28.937 real 0m9.662s 00:11:28.937 user 0m0.028s 00:11:28.937 sys 0m0.080s 00:11:28.938 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.938 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:28.938 ************************************ 00:11:28.938 END TEST filesystem_in_capsule_ext4 00:11:28.938 ************************************ 00:11:28.938 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:28.938 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:28.938 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.938 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.938 
************************************ 00:11:28.938 START TEST filesystem_in_capsule_btrfs 00:11:28.938 ************************************ 00:11:28.938 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:28.938 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:28.938 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:28.938 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:28.938 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:28.938 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:28.938 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:28.938 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:28.938 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:28.938 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:28.938 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:28.938 btrfs-progs v6.8.1 00:11:28.938 See https://btrfs.readthedocs.io for more information. 00:11:28.938 00:11:28.938 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:28.938 NOTE: several default settings have changed in version 5.15, please make sure 00:11:28.938 this does not affect your deployments: 00:11:28.938 - DUP for metadata (-m dup) 00:11:28.938 - enabled no-holes (-O no-holes) 00:11:28.938 - enabled free-space-tree (-R free-space-tree) 00:11:28.938 00:11:28.938 Label: (null) 00:11:28.938 UUID: 160803b3-2f01-4962-9cdf-1de96964b76b 00:11:28.938 Node size: 16384 00:11:28.938 Sector size: 4096 (CPU page size: 4096) 00:11:28.938 Filesystem size: 510.00MiB 00:11:28.938 Block group profiles: 00:11:28.938 Data: single 8.00MiB 00:11:28.938 Metadata: DUP 32.00MiB 00:11:28.938 System: DUP 8.00MiB 00:11:28.938 SSD detected: yes 00:11:28.938 Zoned device: no 00:11:28.938 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:28.938 Checksum: crc32c 00:11:28.938 Number of devices: 1 00:11:28.938 Devices: 00:11:28.938 ID SIZE PATH 00:11:28.938 1 510.00MiB /dev/nvme0n1p1 00:11:28.938 00:11:28.938 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:28.938 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:29.508 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:29.508 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:29.508 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:29.508 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:29.508 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:29.508 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:29.768 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3539879 00:11:29.768 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:29.769 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:29.769 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:29.769 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:29.769 00:11:29.769 real 0m1.142s 00:11:29.769 user 0m0.034s 00:11:29.769 sys 0m0.115s 00:11:29.769 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.769 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:29.769 ************************************ 00:11:29.769 END TEST filesystem_in_capsule_btrfs 00:11:29.769 ************************************ 00:11:29.769 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:29.769 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:29.769 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.769 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.769 ************************************ 00:11:29.769 START TEST filesystem_in_capsule_xfs 00:11:29.769 ************************************ 00:11:29.769 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:29.769 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:29.769 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:29.769 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:29.769 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:29.769 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:29.769 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:29.769 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:29.769 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:29.769 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:29.769 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:29.769 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:29.769 = sectsz=512 attr=2, projid32bit=1 00:11:29.769 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:29.769 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:29.769 data = bsize=4096 blocks=130560, imaxpct=25 00:11:29.769 = sunit=0 swidth=0 blks 00:11:29.769 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:29.769 log =internal log bsize=4096 blocks=16384, version=2 00:11:29.769 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:29.769 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:31.152 Discarding blocks...Done. 
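For reference, the make_filesystem helper exercised by all three sub-tests (common/autotest_common.sh@930-949 in the trace) reduces to roughly the following. This is a hedged reconstruction from the xtrace output only; the local retry counter i=0 suggests the real helper also retries on failure, which this sketch omits:

    # Reconstructed from the xtrace above; @NNN comments map to the traced lines.
    make_filesystem() {
        local fstype=$1        # @930
        local dev_name=$2      # @931
        local i=0              # @932 (retry counter in the real helper)
        local force            # @933

        # mkfs.ext4 needs -F to clobber an existing filesystem; btrfs/xfs take -f
        if [ "$fstype" = ext4 ]; then   # @935
            force=-F                    # @936
        else
            force=-f                    # @938
        fi

        mkfs."$fstype" $force "$dev_name" && return 0   # @941/@949
    }

Each sub-test calls it against the same GPT partition, as make_filesystem ext4 /dev/nvme0n1p1, then btrfs, then xfs, and afterwards exercises the mount with the same touch/sync/rm/umount sequence.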
00:11:31.152 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:31.152 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:33.697 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:33.697 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:33.697 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:33.697 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:33.697 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:33.697 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3539879 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:33.697 00:11:33.697 real 0m3.613s 00:11:33.697 user 0m0.025s 00:11:33.697 sys 0m0.079s 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:33.697 ************************************ 00:11:33.697 END TEST filesystem_in_capsule_xfs 00:11:33.697 ************************************ 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:33.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3539879 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3539879 ']' 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3539879 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3539879 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3539879' 00:11:33.697 killing process with pid 3539879 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3539879 00:11:33.697 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3539879 00:11:33.959 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:33.959 00:11:33.959 real 0m21.769s 00:11:33.959 user 1m26.159s 00:11:33.959 sys 0m1.478s 00:11:33.959 19:49:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.959 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.959 ************************************ 00:11:33.959 END TEST nvmf_filesystem_in_capsule 00:11:33.959 ************************************ 00:11:33.959 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:33.959 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:33.959 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:33.959 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:33.959 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:33.959 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:33.959 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:33.959 rmmod nvme_tcp 00:11:33.959 rmmod nvme_fabrics 00:11:33.959 rmmod nvme_keyring 00:11:33.959 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:33.959 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:33.959 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:33.959 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:33.959 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:33.959 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:33.959 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:33.959 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:33.959 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:33.959 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:33.959 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:33.959 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:33.959 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:33.959 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.959 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.959 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.505 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:36.505 00:11:36.505 real 0m55.433s 00:11:36.505 user 3m0.481s 00:11:36.505 sys 0m9.085s 00:11:36.505 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.506 
************************************ 00:11:36.506 END TEST nvmf_filesystem 00:11:36.506 ************************************ 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:36.506 ************************************ 00:11:36.506 START TEST nvmf_target_discovery 00:11:36.506 ************************************ 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:36.506 * Looking for test storage... 00:11:36.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:36.506 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:36.506 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:36.506 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:36.506 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:36.506 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:36.506 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:36.506 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:36.506 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:36.506 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:36.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.506 --rc genhtml_branch_coverage=1 00:11:36.506 --rc genhtml_function_coverage=1 00:11:36.506 --rc genhtml_legend=1 00:11:36.506 --rc geninfo_all_blocks=1 00:11:36.506 --rc geninfo_unexecuted_blocks=1 00:11:36.506 00:11:36.506 ' 00:11:36.506 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:36.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.506 --rc genhtml_branch_coverage=1 00:11:36.506 --rc genhtml_function_coverage=1 00:11:36.506 --rc genhtml_legend=1 00:11:36.506 --rc geninfo_all_blocks=1 00:11:36.506 --rc geninfo_unexecuted_blocks=1 00:11:36.506 00:11:36.506 ' 00:11:36.506 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:36.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.506 --rc genhtml_branch_coverage=1 00:11:36.506 --rc genhtml_function_coverage=1 00:11:36.506 --rc genhtml_legend=1 00:11:36.506 --rc geninfo_all_blocks=1 00:11:36.506 --rc geninfo_unexecuted_blocks=1 00:11:36.506 00:11:36.506 ' 00:11:36.506 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:36.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.506 --rc genhtml_branch_coverage=1 00:11:36.506 --rc genhtml_function_coverage=1 00:11:36.506 --rc genhtml_legend=1 00:11:36.506 --rc geninfo_all_blocks=1 00:11:36.506 --rc geninfo_unexecuted_blocks=1 00:11:36.506 00:11:36.506 ' 00:11:36.506 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:36.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.507 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:36.508 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:36.508 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:36.508 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:44.867 19:49:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:44.867 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:44.867 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:44.867 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:44.868 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
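(Annotation: the scan above resolves each E810 PCI function to its kernel net device by globbing sysfs, which is why the log prints "Found net devices under 0000:4b:00.x: cvl_0_y" below. A minimal standalone sketch of that mapping — device addresses taken from this log, not the harness code itself:

  for pci in 0000:4b:00.0 0000:4b:00.1; do
    # nvmf/common.sh@411 builds pci_net_devs the same way: one sysfs glob per PCI function
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$dev" ] && echo "PCI $pci -> net device ${dev##*/}"
    done
  done
)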
00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:44.868 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:44.868 19:49:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:44.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:44.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:11:44.868 00:11:44.868 --- 10.0.0.2 ping statistics --- 00:11:44.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.868 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:44.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:44.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:11:44.868 00:11:44.868 --- 10.0.0.1 ping statistics --- 00:11:44.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.868 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3548481 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3548481 00:11:44.868 19:49:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3548481 ']' 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.868 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.868 [2024-11-26 19:49:44.611287] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:11:44.868 [2024-11-26 19:49:44.611358] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.868 [2024-11-26 19:49:44.710663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:44.868 [2024-11-26 19:49:44.763452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.869 [2024-11-26 19:49:44.763504] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.869 [2024-11-26 19:49:44.763513] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.869 [2024-11-26 19:49:44.763521] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.869 [2024-11-26 19:49:44.763527] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
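(Annotation: nvmf_tgt is deliberately launched inside the cvl_0_0_ns_spdk namespace so its TCP listener binds to the interface moved there (10.0.0.2), while initiator-side tools stay in the root namespace on 10.0.0.1. A hedged sketch of that launch — the binary path and flags mirror the log; the readiness wait via rpc.py framework_wait_init and the default /var/tmp/spdk.sock socket path are illustrative assumptions, not what waitforlisten literally does:

  sudo ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # flags as in the log
  nvmfpid=$!
  # Block until the app is up and serving RPCs on its UNIX socket:
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init
  echo "nvmf_tgt running as pid $nvmfpid"
)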
00:11:44.869 [2024-11-26 19:49:44.765774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.869 [2024-11-26 19:49:44.765935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.869 [2024-11-26 19:49:44.766097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.869 [2024-11-26 19:49:44.766097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.869 [2024-11-26 19:49:45.495219] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.869 Null1 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.869 19:49:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.869 [2024-11-26 19:49:45.565408] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.869 Null2 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:44.869 Null3 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.869 Null4 00:11:44.869 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.870 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:44.870 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.870 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.131 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.131 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:45.131 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.131 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.131 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.131 19:49:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:45.131 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.131 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.131 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.131 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:45.131 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.131 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.131 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.131 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:45.131 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.131 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.131 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.131 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:11:45.392 00:11:45.392 Discovery Log Number of Records 6, Generation counter 6 00:11:45.392 =====Discovery Log Entry 0====== 00:11:45.392 trtype: tcp 00:11:45.392 adrfam: ipv4 00:11:45.392 subtype: current discovery subsystem 00:11:45.392 treq: not required 00:11:45.392 portid: 0 00:11:45.392 trsvcid: 4420 00:11:45.392 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:45.392 traddr: 10.0.0.2 00:11:45.392 eflags: explicit discovery connections, duplicate discovery information 00:11:45.392 sectype: none 00:11:45.392 =====Discovery Log Entry 1====== 00:11:45.392 trtype: tcp 00:11:45.392 adrfam: ipv4 00:11:45.392 subtype: nvme subsystem 00:11:45.392 treq: not required 00:11:45.392 portid: 0 00:11:45.392 trsvcid: 4420 00:11:45.392 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:45.392 traddr: 10.0.0.2 00:11:45.392 eflags: none 00:11:45.392 sectype: none 00:11:45.392 =====Discovery Log Entry 2====== 00:11:45.392 trtype: tcp 00:11:45.392 adrfam: ipv4 00:11:45.392 subtype: nvme subsystem 00:11:45.392 treq: not required 00:11:45.392 portid: 0 00:11:45.392 trsvcid: 4420 00:11:45.392 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:45.392 traddr: 10.0.0.2 00:11:45.392 eflags: none 00:11:45.392 sectype: none 00:11:45.392 =====Discovery Log Entry 3====== 00:11:45.392 trtype: tcp 00:11:45.392 adrfam: ipv4 00:11:45.393 subtype: nvme subsystem 00:11:45.393 treq: not required 00:11:45.393 portid: 0 00:11:45.393 trsvcid: 4420 00:11:45.393 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:45.393 traddr: 10.0.0.2 00:11:45.393 eflags: none 00:11:45.393 sectype: none 00:11:45.393 =====Discovery Log Entry 4====== 00:11:45.393 trtype: tcp 00:11:45.393 adrfam: ipv4 00:11:45.393 subtype: nvme subsystem 
00:11:45.393 treq: not required 00:11:45.393 portid: 0 00:11:45.393 trsvcid: 4420 00:11:45.393 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:45.393 traddr: 10.0.0.2 00:11:45.393 eflags: none 00:11:45.393 sectype: none 00:11:45.393 =====Discovery Log Entry 5====== 00:11:45.393 trtype: tcp 00:11:45.393 adrfam: ipv4 00:11:45.393 subtype: discovery subsystem referral 00:11:45.393 treq: not required 00:11:45.393 portid: 0 00:11:45.393 trsvcid: 4430 00:11:45.393 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:45.393 traddr: 10.0.0.2 00:11:45.393 eflags: none 00:11:45.393 sectype: none 00:11:45.393 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:45.393 Perform nvmf subsystem discovery via RPC 00:11:45.393 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:45.393 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.393 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.393 [ 00:11:45.393 { 00:11:45.393 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:45.393 "subtype": "Discovery", 00:11:45.393 "listen_addresses": [ 00:11:45.393 { 00:11:45.393 "trtype": "TCP", 00:11:45.393 "adrfam": "IPv4", 00:11:45.393 "traddr": "10.0.0.2", 00:11:45.393 "trsvcid": "4420" 00:11:45.393 } 00:11:45.393 ], 00:11:45.393 "allow_any_host": true, 00:11:45.393 "hosts": [] 00:11:45.393 }, 00:11:45.393 { 00:11:45.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:45.393 "subtype": "NVMe", 00:11:45.393 "listen_addresses": [ 00:11:45.393 { 00:11:45.393 "trtype": "TCP", 00:11:45.393 "adrfam": "IPv4", 00:11:45.393 "traddr": "10.0.0.2", 00:11:45.393 "trsvcid": "4420" 00:11:45.393 } 00:11:45.393 ], 00:11:45.393 "allow_any_host": true, 00:11:45.393 "hosts": [], 00:11:45.393 "serial_number": "SPDK00000000000001", 00:11:45.393 "model_number": "SPDK bdev Controller", 00:11:45.393 "max_namespaces": 32, 00:11:45.393 "min_cntlid": 1, 00:11:45.393 "max_cntlid": 65519, 00:11:45.393 "namespaces": [ 00:11:45.393 { 00:11:45.393 "nsid": 1, 00:11:45.393 "bdev_name": "Null1", 00:11:45.393 "name": "Null1", 00:11:45.393 "nguid": "BB0FD0EC94F74007A6D622E422EB020F", 00:11:45.393 "uuid": "bb0fd0ec-94f7-4007-a6d6-22e422eb020f" 00:11:45.393 } 00:11:45.393 ] 00:11:45.393 }, 00:11:45.393 { 00:11:45.393 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:45.393 "subtype": "NVMe", 00:11:45.393 "listen_addresses": [ 00:11:45.393 { 00:11:45.393 "trtype": "TCP", 00:11:45.393 "adrfam": "IPv4", 00:11:45.393 "traddr": "10.0.0.2", 00:11:45.393 "trsvcid": "4420" 00:11:45.393 } 00:11:45.393 ], 00:11:45.393 "allow_any_host": true, 00:11:45.393 "hosts": [], 00:11:45.393 "serial_number": "SPDK00000000000002", 00:11:45.393 "model_number": "SPDK bdev Controller", 00:11:45.393 "max_namespaces": 32, 00:11:45.393 "min_cntlid": 1, 00:11:45.393 "max_cntlid": 65519, 00:11:45.393 "namespaces": [ 00:11:45.393 { 00:11:45.393 "nsid": 1, 00:11:45.393 "bdev_name": "Null2", 00:11:45.393 "name": "Null2", 00:11:45.393 "nguid": "2D77ABE806AE4EEC8A47FE3B9BB1323E", 00:11:45.393 "uuid": "2d77abe8-06ae-4eec-8a47-fe3b9bb1323e" 00:11:45.393 } 00:11:45.393 ] 00:11:45.393 }, 00:11:45.393 { 00:11:45.393 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:45.393 "subtype": "NVMe", 00:11:45.393 "listen_addresses": [ 00:11:45.393 { 00:11:45.393 "trtype": "TCP", 00:11:45.393 "adrfam": "IPv4", 00:11:45.393 "traddr": "10.0.0.2", 
00:11:45.393 "trsvcid": "4420" 00:11:45.393 } 00:11:45.393 ], 00:11:45.393 "allow_any_host": true, 00:11:45.393 "hosts": [], 00:11:45.393 "serial_number": "SPDK00000000000003", 00:11:45.393 "model_number": "SPDK bdev Controller", 00:11:45.393 "max_namespaces": 32, 00:11:45.393 "min_cntlid": 1, 00:11:45.393 "max_cntlid": 65519, 00:11:45.393 "namespaces": [ 00:11:45.393 { 00:11:45.393 "nsid": 1, 00:11:45.393 "bdev_name": "Null3", 00:11:45.393 "name": "Null3", 00:11:45.393 "nguid": "E7EFC97E993C45D7B9446CABBA18210D", 00:11:45.393 "uuid": "e7efc97e-993c-45d7-b944-6cabba18210d" 00:11:45.393 } 00:11:45.393 ] 00:11:45.393 }, 00:11:45.393 { 00:11:45.393 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:45.393 "subtype": "NVMe", 00:11:45.393 "listen_addresses": [ 00:11:45.393 { 00:11:45.393 "trtype": "TCP", 00:11:45.393 "adrfam": "IPv4", 00:11:45.393 "traddr": "10.0.0.2", 00:11:45.393 "trsvcid": "4420" 00:11:45.393 } 00:11:45.393 ], 00:11:45.393 "allow_any_host": true, 00:11:45.393 "hosts": [], 00:11:45.393 "serial_number": "SPDK00000000000004", 00:11:45.393 "model_number": "SPDK bdev Controller", 00:11:45.393 "max_namespaces": 32, 00:11:45.393 "min_cntlid": 1, 00:11:45.393 "max_cntlid": 65519, 00:11:45.393 "namespaces": [ 00:11:45.393 { 00:11:45.393 "nsid": 1, 00:11:45.393 "bdev_name": "Null4", 00:11:45.393 "name": "Null4", 00:11:45.393 "nguid": "493B8F7A57054117B86EEECB8EDC7E48", 00:11:45.393 "uuid": "493b8f7a-5705-4117-b86e-eecb8edc7e48" 00:11:45.393 } 00:11:45.393 ] 00:11:45.393 } 00:11:45.393 ] 00:11:45.393 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.393 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:45.393 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:45.393 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:45.393 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.393 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.393 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.393 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:45.393 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.393 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.393 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.393 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:45.393 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.394 19:49:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:45.394 19:49:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:45.394 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:45.394 rmmod nvme_tcp 00:11:45.394 rmmod nvme_fabrics 00:11:45.394 rmmod nvme_keyring 00:11:45.655 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:45.656 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:45.656 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:45.656 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3548481 ']' 00:11:45.656 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3548481 00:11:45.656 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3548481 ']' 00:11:45.656 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3548481 00:11:45.656 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:45.656 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:45.656 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3548481 00:11:45.656 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:45.656 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:45.656 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3548481' 00:11:45.656 killing process with pid 3548481 00:11:45.656 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3548481 00:11:45.656 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3548481 00:11:45.656 19:49:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:45.656 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:45.656 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:45.656 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:45.656 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:45.656 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:45.656 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:45.656 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:45.656 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:45.656 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.656 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.656 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.207 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:48.208 00:11:48.208 real 0m11.739s 00:11:48.208 user 0m9.114s 00:11:48.208 sys 0m6.139s 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.208 ************************************ 00:11:48.208 END TEST nvmf_target_discovery 00:11:48.208 ************************************ 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:48.208 ************************************ 00:11:48.208 START TEST nvmf_referrals 00:11:48.208 ************************************ 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:48.208 * Looking for test storage... 
00:11:48.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:48.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.208 --rc genhtml_branch_coverage=1 00:11:48.208 --rc genhtml_function_coverage=1 00:11:48.208 --rc genhtml_legend=1 00:11:48.208 --rc geninfo_all_blocks=1 00:11:48.208 --rc geninfo_unexecuted_blocks=1 00:11:48.208 00:11:48.208 ' 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:48.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.208 --rc genhtml_branch_coverage=1 00:11:48.208 --rc genhtml_function_coverage=1 00:11:48.208 --rc genhtml_legend=1 00:11:48.208 --rc geninfo_all_blocks=1 00:11:48.208 --rc geninfo_unexecuted_blocks=1 00:11:48.208 00:11:48.208 ' 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:48.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.208 --rc genhtml_branch_coverage=1 00:11:48.208 --rc genhtml_function_coverage=1 00:11:48.208 --rc genhtml_legend=1 00:11:48.208 --rc geninfo_all_blocks=1 00:11:48.208 --rc geninfo_unexecuted_blocks=1 00:11:48.208 00:11:48.208 ' 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:48.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.208 --rc genhtml_branch_coverage=1 00:11:48.208 --rc genhtml_function_coverage=1 00:11:48.208 --rc genhtml_legend=1 00:11:48.208 --rc geninfo_all_blocks=1 00:11:48.208 --rc geninfo_unexecuted_blocks=1 00:11:48.208 00:11:48.208 ' 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.208 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:48.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
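(Annotation: referrals.sh exercises the referral RPCs against the three loopback addresses defined above on referral port 4430. A hedged standalone sketch of the add/list/remove cycle — the rpc.py path is illustrative; nvmf_discovery_add_referral and nvmf_discovery_remove_referral mirror the rpc_cmd calls seen in the discovery test, and nvmf_discovery_get_referrals is assumed as the listing RPC:

  rpc=./scripts/rpc.py
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  $rpc nvmf_discovery_get_referrals                       # expect three referral entries
  $rpc nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
)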
00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:48.209 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:56.352 19:49:56 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:56.352 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:56.352 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:56.352 
19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:56.352 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:56.352 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:56.352 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:56.352 19:49:56 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:56.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:56.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:11:56.353 00:11:56.353 --- 10.0.0.2 ping statistics --- 00:11:56.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.353 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:56.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:56.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:11:56.353 00:11:56.353 --- 10.0.0.1 ping statistics --- 00:11:56.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.353 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3553167 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3553167 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3553167 ']' 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
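The block above is nvmf_tcp_init building the physical test topology: one E810 port (cvl_0_0) is moved into a private network namespace to play the target, its sibling port (cvl_0_1) stays in the root namespace as the initiator, and one ping in each direction proves reachability before the target application is launched inside the namespace. A condensed sketch of the same setup, with interface names and addresses taken from the log:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                     # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                  # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1              # target namespace -> root namespace

Prefixing NVMF_APP with "ip netns exec $NVMF_TARGET_NAMESPACE" is what makes every subsequent nvmf_tgt invocation run behind that namespace boundary.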
00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.353 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.353 [2024-11-26 19:49:56.478773] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:11:56.353 [2024-11-26 19:49:56.478843] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.353 [2024-11-26 19:49:56.580260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:56.353 [2024-11-26 19:49:56.633710] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:56.353 [2024-11-26 19:49:56.633761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:56.353 [2024-11-26 19:49:56.633770] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:56.353 [2024-11-26 19:49:56.633777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:56.353 [2024-11-26 19:49:56.633783] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:56.353 [2024-11-26 19:49:56.636217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.353 [2024-11-26 19:49:56.636648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.353 [2024-11-26 19:49:56.636809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.353 [2024-11-26 19:49:56.636809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.616 [2024-11-26 19:49:57.351013] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
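With nvmf_tgt now up inside the namespace (one reactor per core of the 0xF mask), the script configures it over JSON-RPC; the resulting listener notice appears just below. rpc_cmd in the trace is a thin wrapper around SPDK's scripts/rpc.py, so the configuration and the first referral check amount to roughly the following (the rpc.py path is a placeholder; flags are copied verbatim from the trace):

RPC=scripts/rpc.py                          # run from an SPDK checkout
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do # the three referrals under test
  $RPC nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
$RPC nvmf_discovery_get_referrals | jq length                      # expect 3
$RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort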
00:11:56.616 [2024-11-26 19:49:57.375427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.616 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.878 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.878 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:56.878 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:56.878 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:56.878 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:56.878 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:56.878 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.878 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.878 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:56.878 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.878 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:56.878 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:56.878 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:56.878 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:56.878 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:56.878 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:56.878 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:56.878 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:56.878 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:57.139 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:57.139 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:57.139 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.139 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.139 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.139 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:57.139 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.139 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.139 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.139 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:57.139 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.139 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.139 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.139 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:57.139 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:57.139 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.139 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.139 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.139 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:57.139 19:49:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:57.139 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:57.139 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:57.140 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:57.140 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:57.140 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:57.402 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:57.402 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:57.402 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:57.402 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.402 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.402 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.402 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:57.402 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.402 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.402 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.402 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:57.402 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:57.402 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:57.402 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:57.402 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.402 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.402 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:57.402 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.402 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:57.402 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:57.402 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:57.402 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:57.402 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:57.402 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:57.402 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:57.402 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:57.663 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:57.663 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:57.663 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:57.663 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:57.663 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:57.663 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:57.663 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:57.924 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:57.924 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:57.924 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:57.924 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:57.924 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:57.924 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:57.924 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:57.924 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:57.924 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.924 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.924 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.924 19:49:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:57.924 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:57.924 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:57.924 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.924 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:57.924 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:57.924 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.186 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.186 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:58.186 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:58.186 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:58.186 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:58.186 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:58.186 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:58.186 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:58.186 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:58.186 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:58.186 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:58.186 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:58.186 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:58.186 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:58.186 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:58.186 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:58.454 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:58.454 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:58.454 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:58.454 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:58.454 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:58.454 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:58.715 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:58.715 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:58.715 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.715 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.715 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.715 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:58.715 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:58.715 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.715 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.715 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.715 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:58.715 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:58.715 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:58.715 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:58.715 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:58.715 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:58.715 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:58.976 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:58.976 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:58.976 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:58.976 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:58.976 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:58.976 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:58.976 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
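Each RPC-side check above is cross-checked from the host side: get_referral_ips nvme and get_discovery_entries in referrals.sh reduce to nvme discover plus a jq filter over the returned discovery log page, roughly as sketched here (the --hostnqn/--hostid options are elided; the trace passes the UUID-based NQN generated earlier):

# Referral addresses as the host sees them (everything except the current entry):
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

# Entries of a given subtype, e.g. the referral carrying an explicit subsystem NQN:
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq '.records[] | select(.subtype == "nvme subsystem")' | jq -r .subnqn

The teardown beginning here and continuing below is symmetric with the setup: unload nvme-tcp/nvme-fabrics on the host, kill the target by its recorded PID, drop the SPDK iptables rule (iptables-save | grep -v SPDK_NVMF | iptables-restore), then remove the namespace and flush the leftover addresses.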
00:11:58.976 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:58.976 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:58.976 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:58.976 rmmod nvme_tcp 00:11:58.976 rmmod nvme_fabrics 00:11:58.976 rmmod nvme_keyring 00:11:58.976 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:58.976 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:58.976 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:58.976 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3553167 ']' 00:11:58.976 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3553167 00:11:58.976 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3553167 ']' 00:11:58.976 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3553167 00:11:58.976 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:58.976 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:58.976 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3553167 00:11:58.976 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:58.976 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:58.976 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3553167' 00:11:58.976 killing process with pid 3553167 00:11:58.976 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 3553167 00:11:58.976 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3553167 00:11:59.237 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:59.237 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:59.237 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:59.237 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:59.237 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:59.237 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:59.237 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:59.237 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:59.237 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:59.237 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.237 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.237 19:49:59 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.173 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:01.173 00:12:01.173 real 0m13.281s 00:12:01.173 user 0m15.898s 00:12:01.173 sys 0m6.570s 00:12:01.173 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.173 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.173 ************************************ 00:12:01.173 END TEST nvmf_referrals 00:12:01.173 ************************************ 00:12:01.173 19:50:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:01.173 19:50:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:01.173 19:50:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.173 19:50:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:01.436 ************************************ 00:12:01.436 START TEST nvmf_connect_disconnect 00:12:01.436 ************************************ 00:12:01.436 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:01.436 * Looking for test storage... 00:12:01.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:01.436 19:50:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:01.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.436 --rc genhtml_branch_coverage=1 00:12:01.436 --rc genhtml_function_coverage=1 00:12:01.436 --rc genhtml_legend=1 00:12:01.436 --rc geninfo_all_blocks=1 00:12:01.436 --rc geninfo_unexecuted_blocks=1 00:12:01.436 00:12:01.436 ' 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:01.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.436 --rc genhtml_branch_coverage=1 00:12:01.436 --rc genhtml_function_coverage=1 00:12:01.436 --rc genhtml_legend=1 00:12:01.436 --rc geninfo_all_blocks=1 00:12:01.436 --rc geninfo_unexecuted_blocks=1 00:12:01.436 00:12:01.436 ' 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:01.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.436 --rc genhtml_branch_coverage=1 00:12:01.436 --rc genhtml_function_coverage=1 00:12:01.436 --rc genhtml_legend=1 00:12:01.436 --rc geninfo_all_blocks=1 00:12:01.436 --rc geninfo_unexecuted_blocks=1 00:12:01.436 00:12:01.436 ' 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:01.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.436 --rc genhtml_branch_coverage=1 00:12:01.436 --rc genhtml_function_coverage=1 00:12:01.436 --rc genhtml_legend=1 00:12:01.436 --rc geninfo_all_blocks=1 00:12:01.436 --rc geninfo_unexecuted_blocks=1 00:12:01.436 00:12:01.436 ' 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.436 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.437 19:50:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:01.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:01.437 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:09.578 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:09.578 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:09.578 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:09.578 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:09.578 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:09.578 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:09.578 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:09.578 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:09.578 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:09.578 
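The "[: : integer expression expected" warning traced above comes from nvmf/common.sh line 33 testing an empty value numerically; bash's [ builtin cannot coerce '' to an integer, so the test returns status 2 and the '-eq 1' branch is simply skipped. A minimal reproduction, with the flag name purely hypothetical, plus the usual guarded form:

    unset SPDK_TEST_FOO               # hypothetical flag; any empty or unset variable behaves the same
    [ "$SPDK_TEST_FOO" -eq 1 ]        # prints "[: : integer expression expected", exit status 2
    [ "${SPDK_TEST_FOO:-0}" -eq 1 ]   # guarded form: empty defaults to 0, comparison is valid
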
19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:09.578 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:09.578 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:09.578 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:09.578 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:09.578 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:09.578 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:09.578 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:09.578 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:09.578 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:09.578 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:09.578 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:09.578 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:09.578 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:09.579 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.579 
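The arrays built above bucket NIC PCI device IDs by family: 0x8086:0x1592/0x159b are Intel E810 parts, 0x8086:0x37d2 is an X722, and the 0x15b3 entries cover assorted Mellanox ConnectX/BlueField parts. A quick cross-check of what is physically present can be done with pciutils, matching the same vendor:device pair the trace just reported:

    lspci -nn -d 8086:159b    # list E810 functions matching the ID found above
    lspci -nn -d 15b3:        # list any Mellanox functions, regardless of device ID
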
19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:09.579 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:09.579 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
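gather_supported_nvmf_pci_devs resolves each matched PCI function to its kernel netdev through /sys/bus/pci/devices/$pci/net/*, which is where the cvl_0_0/cvl_0_1 names reported here come from. A stand-in sketch of that walk (the real code consults a prebuilt pci_bus_cache; this version assumes only standard sysfs layout):

    # Walk sysfs, match the Intel E810 ID pair the log reports (0x8086:0x159b),
    # and list each function's netdevs the same way common.sh does via $pci/net/*.
    for pci in /sys/bus/pci/devices/*; do
      [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
      echo "Found ${pci##*/} ($(<"$pci/vendor") - $(<"$pci/device"))"
      for net in "$pci"/net/*; do
        [ -e "$net" ] && echo "Found net devices under ${pci##*/}: ${net##*/}"
      done
    done
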
00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:09.579 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:09.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:09.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:12:09.579 00:12:09.579 --- 10.0.0.2 ping statistics --- 00:12:09.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.579 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:09.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:09.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:12:09.579 00:12:09.579 --- 10.0.0.1 ping statistics --- 00:12:09.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.579 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3557954 00:12:09.579 19:50:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3557954 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3557954 ']' 00:12:09.579 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.580 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.580 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.580 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.580 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:09.580 [2024-11-26 19:50:09.794417] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:12:09.580 [2024-11-26 19:50:09.794496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.580 [2024-11-26 19:50:09.896820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:09.580 [2024-11-26 19:50:09.950689] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.580 [2024-11-26 19:50:09.950736] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.580 [2024-11-26 19:50:09.950744] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.580 [2024-11-26 19:50:09.950751] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.580 [2024-11-26 19:50:09.950758] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
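nvmf_tcp_init, traced above, splits the two E810 ports across a network namespace: cvl_0_0 (10.0.0.2) moves into cvl_0_0_ns_spdk for the target, cvl_0_1 (10.0.0.1) stays in the root namespace for the initiator, an iptables rule opens TCP/4420, both directions are ping-verified, and nvmf_tgt is launched inside the namespace. Condensed to plain commands, with names, addresses, and flags exactly as traced:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
             -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # initiator side -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target side -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
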
00:12:09.580 [2024-11-26 19:50:09.952828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.580 [2024-11-26 19:50:09.952982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.580 [2024-11-26 19:50:09.953143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:09.580 [2024-11-26 19:50:09.953144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.841 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.841 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:09.841 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:09.841 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:09.841 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:10.103 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.103 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:10.103 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.103 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:10.103 [2024-11-26 19:50:10.672124] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:10.103 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.103 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:10.103 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.103 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:10.103 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.103 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:10.103 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:10.103 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.103 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:10.103 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.103 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:10.103 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.103 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:10.103 19:50:10 
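With the reactors up, the test provisions the target over JSON-RPC: a TCP transport, a 64 MiB / 512 B-block malloc bdev, subsystem cnode1, its namespace, and (just below) a listener on 10.0.0.2:4420. The same sequence written against scripts/rpc.py, the client that the rpc_cmd helper wraps, from an SPDK checkout:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    ./scripts/rpc.py bdev_malloc_create 64 512       # returns the bdev name, Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
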
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.103 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:10.103 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.103 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:10.103 [2024-11-26 19:50:10.755796] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.103 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.103 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:10.103 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:10.103 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:14.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.436 19:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:28.436 19:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:28.436 19:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:28.436 19:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:28.436 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:28.436 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:28.436 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:28.436 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:28.436 rmmod nvme_tcp 00:12:28.436 rmmod nvme_fabrics 00:12:28.436 rmmod nvme_keyring 00:12:28.436 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:28.436 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:28.436 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:28.436 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3557954 ']' 00:12:28.436 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3557954 00:12:28.436 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3557954 ']' 00:12:28.436 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3557954 00:12:28.436 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
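The five "disconnected 1 controller(s)" lines above are nvme-cli's disconnect output, one per iteration of the test loop (num_iterations=5, with xtrace suppressed). A sketch of the loop's shape; the real body lives in connect_disconnect.sh, and the hostnqn/hostid pair is the NVME_HOST one generated when common.sh was sourced:

    for i in $(seq 1 5); do
      nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
           --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "NQN:... disconnected 1 controller(s)"
    done
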
00:12:28.436 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:28.436 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3557954 00:12:28.436 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:28.436 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:28.436 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3557954' 00:12:28.436 killing process with pid 3557954 00:12:28.436 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3557954 00:12:28.437 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3557954 00:12:28.697 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:28.697 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:28.697 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:28.697 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:28.697 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:28.697 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:28.697 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:28.697 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:28.697 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:28.697 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.697 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.697 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.608 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:30.608 00:12:30.608 real 0m29.346s 00:12:30.608 user 1m18.959s 00:12:30.608 sys 0m7.226s 00:12:30.608 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.608 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:30.608 ************************************ 00:12:30.608 END TEST nvmf_connect_disconnect 00:12:30.608 ************************************ 00:12:30.608 19:50:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:30.608 19:50:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:30.608 19:50:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.608 19:50:31 
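Before the run moves on to nvmf_multitarget, nvmftestfini has unwound everything the init set up: unload the host-side modules, kill the target, strip only the SPDK-tagged iptables rule, and flush the initiator address. Condensed from the trace, with the namespace removal hedged since _remove_spdk_ns's body is not shown in this log:

    modprobe -v -r nvme-tcp                # cascades: rmmod nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 3557954                           # the nvmf_tgt pid for this test
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove only SPDK's ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk        # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1
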
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:30.869 ************************************ 00:12:30.869 START TEST nvmf_multitarget 00:12:30.869 ************************************ 00:12:30.869 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:30.869 * Looking for test storage... 00:12:30.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:30.869 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:30.869 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:12:30.869 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:30.869 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:30.869 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:30.869 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:30.869 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:30.869 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:30.869 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:30.869 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:30.869 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:30.869 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:30.869 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:30.869 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:30.869 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:30.869 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:30.869 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:30.869 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:30.869 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:30.869 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:30.869 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:30.869 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:30.869 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:30.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.870 --rc genhtml_branch_coverage=1 00:12:30.870 --rc genhtml_function_coverage=1 00:12:30.870 --rc genhtml_legend=1 00:12:30.870 --rc geninfo_all_blocks=1 00:12:30.870 --rc geninfo_unexecuted_blocks=1 00:12:30.870 00:12:30.870 ' 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:30.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.870 --rc genhtml_branch_coverage=1 00:12:30.870 --rc genhtml_function_coverage=1 00:12:30.870 --rc genhtml_legend=1 00:12:30.870 --rc geninfo_all_blocks=1 00:12:30.870 --rc geninfo_unexecuted_blocks=1 00:12:30.870 00:12:30.870 ' 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:30.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.870 --rc genhtml_branch_coverage=1 00:12:30.870 --rc genhtml_function_coverage=1 00:12:30.870 --rc genhtml_legend=1 00:12:30.870 --rc geninfo_all_blocks=1 00:12:30.870 --rc geninfo_unexecuted_blocks=1 00:12:30.870 00:12:30.870 ' 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:30.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.870 --rc genhtml_branch_coverage=1 00:12:30.870 --rc genhtml_function_coverage=1 00:12:30.870 --rc genhtml_legend=1 00:12:30.870 --rc geninfo_all_blocks=1 00:12:30.870 --rc geninfo_unexecuted_blocks=1 00:12:30.870 00:12:30.870 ' 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.870 19:50:31 
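The lt/cmp_versions traces above are scripts/common.sh deciding whether the installed lcov predates 2.x, which selects the --rc option spelling exported into LCOV_OPTS/LCOV. A compact stand-in for the same decision, using GNU sort's version ordering instead of the field-by-field compare:

    version_lt() {  # true if $1 < $2 in version order
      [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    version_lt 1.15 2 && echo "lcov < 2: keep the lcov_branch_coverage=1 option style"
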
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:30.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:30.870 19:50:31 
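rpc_py is repointed here from the plain rpc.py wrapper to multitarget_rpc.py, which drives the same UNIX-socket JSON-RPC server but issues the multi-target calls this test exercises. The invocations traced further below amount to:

    $rpc_py nvmf_get_targets | jq length            # 1: only the default target exists so far
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
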
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:30.870 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:39.013 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:39.013 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:39.013 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:39.013 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:39.013 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:39.014 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:39.014 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:39.014 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:39.014 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:39.014 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:39.014 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:39.014 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:39.014 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:39.014 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:39.014 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:39.014 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:39.014 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:39.014 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:39.014 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:39.014 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:39.014 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:39.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:39.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:12:39.014 00:12:39.014 --- 10.0.0.2 ping statistics --- 00:12:39.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.014 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:39.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:39.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:12:39.014 00:12:39.014 --- 10.0.0.1 ping statistics --- 00:12:39.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.014 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3566076 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3566076 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3566076 ']' 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:39.014 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:39.014 [2024-11-26 19:50:39.272564] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:12:39.014 [2024-11-26 19:50:39.272628] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.014 [2024-11-26 19:50:39.374424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:39.014 [2024-11-26 19:50:39.427988] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.014 [2024-11-26 19:50:39.428044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:39.014 [2024-11-26 19:50:39.428053] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.014 [2024-11-26 19:50:39.428061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.014 [2024-11-26 19:50:39.428067] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:39.014 [2024-11-26 19:50:39.430459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.014 [2024-11-26 19:50:39.430621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.014 [2024-11-26 19:50:39.430783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.014 [2024-11-26 19:50:39.430783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:39.586 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:39.586 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:39.586 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:39.586 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:39.586 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:39.586 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.586 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:39.586 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:39.586 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:39.586 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:39.586 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:39.586 "nvmf_tgt_1" 00:12:39.586 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:39.850 "nvmf_tgt_2" 00:12:39.850 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:12:39.850 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:39.850 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:39.850 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:40.111 true 00:12:40.111 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:40.111 true 00:12:40.111 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:40.111 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:40.371 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:40.371 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:40.371 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:40.371 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:40.371 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:40.371 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:40.371 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:40.371 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:40.371 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:40.371 rmmod nvme_tcp 00:12:40.371 rmmod nvme_fabrics 00:12:40.371 rmmod nvme_keyring 00:12:40.371 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:40.371 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:40.371 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:40.371 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3566076 ']' 00:12:40.371 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3566076 00:12:40.371 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3566076 ']' 00:12:40.371 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3566076 00:12:40.371 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:40.371 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.371 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3566076 00:12:40.371 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:40.371 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:40.371 19:50:41 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3566076' 00:12:40.371 killing process with pid 3566076 00:12:40.371 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3566076 00:12:40.371 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3566076 00:12:40.632 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:40.632 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:40.632 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:40.632 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:40.632 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:40.632 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:40.632 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:40.632 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:40.632 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:40.632 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.632 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:40.632 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.542 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:42.542 00:12:42.542 real 0m11.899s 00:12:42.542 user 0m10.259s 00:12:42.542 sys 0m6.244s 00:12:42.542 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:42.542 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:42.542 ************************************ 00:12:42.542 END TEST nvmf_multitarget 00:12:42.542 ************************************ 00:12:42.802 19:50:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:42.802 19:50:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:42.802 19:50:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.802 19:50:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:42.802 ************************************ 00:12:42.802 START TEST nvmf_rpc 00:12:42.802 ************************************ 00:12:42.802 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:42.802 * Looking for test storage... 
00:12:42.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.802 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:42.802 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:12:42.802 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:42.802 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:42.802 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:42.802 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:42.802 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:42.802 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:42.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.803 --rc genhtml_branch_coverage=1 00:12:42.803 --rc genhtml_function_coverage=1 00:12:42.803 --rc genhtml_legend=1 00:12:42.803 --rc geninfo_all_blocks=1 00:12:42.803 --rc geninfo_unexecuted_blocks=1 00:12:42.803 00:12:42.803 ' 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:42.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.803 --rc genhtml_branch_coverage=1 00:12:42.803 --rc genhtml_function_coverage=1 00:12:42.803 --rc genhtml_legend=1 00:12:42.803 --rc geninfo_all_blocks=1 00:12:42.803 --rc geninfo_unexecuted_blocks=1 00:12:42.803 00:12:42.803 ' 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:42.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.803 --rc genhtml_branch_coverage=1 00:12:42.803 --rc genhtml_function_coverage=1 00:12:42.803 --rc genhtml_legend=1 00:12:42.803 --rc geninfo_all_blocks=1 00:12:42.803 --rc geninfo_unexecuted_blocks=1 00:12:42.803 00:12:42.803 ' 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:42.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.803 --rc genhtml_branch_coverage=1 00:12:42.803 --rc genhtml_function_coverage=1 00:12:42.803 --rc genhtml_legend=1 00:12:42.803 --rc geninfo_all_blocks=1 00:12:42.803 --rc geninfo_unexecuted_blocks=1 00:12:42.803 00:12:42.803 ' 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:42.803 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:43.064 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
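[editor's aside] The lt 1.15 2 / cmp_versions trace above is a field-wise version comparison in pure bash, used by the harness to pick lcov-era LCOV_OPTS flags. A minimal sketch of the same idea follows; function and variable names mirror scripts/common.sh as logged, but folding the decimal-normalisation helper into a ${...:-0} default is a simplification of mine:

  # Split both versions on ".", "-" and ":" and compare field by field.
  cmp_versions() {
      local ver1 ver1_l ver2 ver2_l op=$2
      IFS=.-: read -ra ver1 <<< "$1"    # e.g. "1.15" -> (1 15)
      IFS=.-: read -ra ver2 <<< "$3"    # e.g. "2"    -> (2)
      ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
      local lt=0 gt=0 v
      for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && gt=1 && break
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && lt=1 && break
      done
      case "$op" in
          '<')  (( lt == 1 )) ;;
          '>')  (( gt == 1 )) ;;
          '<=') (( gt == 0 )) ;;
          '>=') (( lt == 0 )) ;;
          '==') (( lt == 0 && gt == 0 )) ;;
      esac
  }
  lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 -> true, as in the trace above

Because 1 < 2 in the first field, the check succeeds and the newer lcov branch/function coverage flags are exported, exactly as the LCOV_OPTS lines above show.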
00:12:43.064 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.064 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.064 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.064 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.064 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.064 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.064 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.064 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.064 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.064 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:43.064 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:43.064 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.064 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:43.064 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:43.064 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:43.064 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:43.064 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:43.064 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.064 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.064 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.064 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.064 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.064 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.064 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:43.064 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.064 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:43.064 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:43.065 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:43.065 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:43.065 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.065 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.065 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:43.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:43.065 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:43.065 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:43.065 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:43.065 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:43.065 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:43.065 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:43.065 19:50:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.065 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:43.065 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:43.065 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:43.065 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.065 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.065 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.065 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:43.065 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:43.065 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:43.065 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:51.204 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:51.204 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:51.204 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:51.205 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:51.205 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:51.205 19:50:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:51.205 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:51.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:51.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:12:51.205 00:12:51.205 --- 10.0.0.2 ping statistics --- 00:12:51.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.205 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:51.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:51.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:12:51.205 00:12:51.205 --- 10.0.0.1 ping statistics --- 00:12:51.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.205 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3570771 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3570771 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3570771 ']' 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:51.205 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.205 [2024-11-26 19:50:51.295940] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:12:51.205 [2024-11-26 19:50:51.296001] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.205 [2024-11-26 19:50:51.395927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:51.205 [2024-11-26 19:50:51.448470] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.205 [2024-11-26 19:50:51.448531] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.205 [2024-11-26 19:50:51.448539] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.205 [2024-11-26 19:50:51.448547] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.205 [2024-11-26 19:50:51.448553] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:51.205 [2024-11-26 19:50:51.450631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.205 [2024-11-26 19:50:51.450797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.205 [2024-11-26 19:50:51.450959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.205 [2024-11-26 19:50:51.450960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.467 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.467 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:51.467 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:51.467 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:51.467 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.467 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.467 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:51.468 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.468 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.468 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.468 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:51.468 "tick_rate": 2400000000, 00:12:51.468 "poll_groups": [ 00:12:51.468 { 00:12:51.468 "name": "nvmf_tgt_poll_group_000", 00:12:51.468 "admin_qpairs": 0, 00:12:51.468 "io_qpairs": 0, 00:12:51.468 "current_admin_qpairs": 0, 00:12:51.468 "current_io_qpairs": 0, 00:12:51.468 "pending_bdev_io": 0, 00:12:51.468 "completed_nvme_io": 0, 00:12:51.468 "transports": [] 00:12:51.468 }, 00:12:51.468 { 00:12:51.468 "name": "nvmf_tgt_poll_group_001", 00:12:51.468 "admin_qpairs": 0, 00:12:51.468 "io_qpairs": 0, 00:12:51.468 "current_admin_qpairs": 0, 00:12:51.468 "current_io_qpairs": 0, 00:12:51.468 "pending_bdev_io": 0, 00:12:51.468 "completed_nvme_io": 0, 00:12:51.468 "transports": [] 00:12:51.468 }, 00:12:51.468 { 00:12:51.468 "name": "nvmf_tgt_poll_group_002", 00:12:51.468 "admin_qpairs": 0, 00:12:51.468 "io_qpairs": 0, 00:12:51.468 
"current_admin_qpairs": 0, 00:12:51.468 "current_io_qpairs": 0, 00:12:51.468 "pending_bdev_io": 0, 00:12:51.468 "completed_nvme_io": 0, 00:12:51.468 "transports": [] 00:12:51.468 }, 00:12:51.468 { 00:12:51.468 "name": "nvmf_tgt_poll_group_003", 00:12:51.468 "admin_qpairs": 0, 00:12:51.468 "io_qpairs": 0, 00:12:51.468 "current_admin_qpairs": 0, 00:12:51.468 "current_io_qpairs": 0, 00:12:51.468 "pending_bdev_io": 0, 00:12:51.468 "completed_nvme_io": 0, 00:12:51.468 "transports": [] 00:12:51.468 } 00:12:51.468 ] 00:12:51.468 }' 00:12:51.468 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:51.468 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:51.468 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:51.468 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:51.468 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:51.468 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:51.730 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:51.730 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:51.730 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.730 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.730 [2024-11-26 19:50:52.292933] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:51.730 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.730 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:51.730 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.730 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.730 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.730 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:51.730 "tick_rate": 2400000000, 00:12:51.730 "poll_groups": [ 00:12:51.730 { 00:12:51.730 "name": "nvmf_tgt_poll_group_000", 00:12:51.730 "admin_qpairs": 0, 00:12:51.730 "io_qpairs": 0, 00:12:51.730 "current_admin_qpairs": 0, 00:12:51.730 "current_io_qpairs": 0, 00:12:51.730 "pending_bdev_io": 0, 00:12:51.730 "completed_nvme_io": 0, 00:12:51.730 "transports": [ 00:12:51.730 { 00:12:51.730 "trtype": "TCP" 00:12:51.730 } 00:12:51.730 ] 00:12:51.730 }, 00:12:51.730 { 00:12:51.730 "name": "nvmf_tgt_poll_group_001", 00:12:51.730 "admin_qpairs": 0, 00:12:51.730 "io_qpairs": 0, 00:12:51.730 "current_admin_qpairs": 0, 00:12:51.730 "current_io_qpairs": 0, 00:12:51.730 "pending_bdev_io": 0, 00:12:51.730 "completed_nvme_io": 0, 00:12:51.730 "transports": [ 00:12:51.730 { 00:12:51.730 "trtype": "TCP" 00:12:51.730 } 00:12:51.730 ] 00:12:51.730 }, 00:12:51.730 { 00:12:51.730 "name": "nvmf_tgt_poll_group_002", 00:12:51.730 "admin_qpairs": 0, 00:12:51.730 "io_qpairs": 0, 00:12:51.730 "current_admin_qpairs": 0, 00:12:51.730 "current_io_qpairs": 0, 00:12:51.730 "pending_bdev_io": 0, 00:12:51.730 "completed_nvme_io": 0, 00:12:51.730 "transports": [ 00:12:51.730 { 00:12:51.730 "trtype": "TCP" 
00:12:51.730 } 00:12:51.730 ] 00:12:51.730 }, 00:12:51.730 { 00:12:51.730 "name": "nvmf_tgt_poll_group_003", 00:12:51.730 "admin_qpairs": 0, 00:12:51.730 "io_qpairs": 0, 00:12:51.730 "current_admin_qpairs": 0, 00:12:51.730 "current_io_qpairs": 0, 00:12:51.730 "pending_bdev_io": 0, 00:12:51.730 "completed_nvme_io": 0, 00:12:51.730 "transports": [ 00:12:51.730 { 00:12:51.730 "trtype": "TCP" 00:12:51.730 } 00:12:51.730 ] 00:12:51.730 } 00:12:51.730 ] 00:12:51.730 }' 00:12:51.730 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:51.730 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:51.730 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:51.730 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:51.730 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:51.730 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:51.730 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:51.730 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:51.730 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.731 Malloc1 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.731 [2024-11-26 19:50:52.503543] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:51.731 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:51.731 [2024-11-26 19:50:52.540747] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:51.992 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:51.992 could not add new controller: failed to write to nvme-fabrics device 00:12:51.992 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:51.992 19:50:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:51.992 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:51.992 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:51.992 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:51.992 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.993 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.993 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.993 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.375 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:53.375 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:53.375 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.375 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:53.375 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:55.918 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:55.918 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:55.918 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:55.918 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:55.918 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.918 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:55.918 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:55.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.918 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:55.918 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:55.918 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:55.919 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.919 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:55.919 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.919 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:55.919 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:55.919 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.919 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.919 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.919 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.919 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:55.919 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.919 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:55.919 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:55.919 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:55.919 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:55.919 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:55.919 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:55.919 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:55.919 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:55.919 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.919 [2024-11-26 19:50:56.315972] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:55.919 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:55.919 could not add new controller: failed to write to nvme-fabrics device 00:12:55.919 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:55.919 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:55.919 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:55.919 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:55.919 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:55.919 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.919 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.919 
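The exchange above is SPDK's host access-control check: with no hosts admitted, the NOT wrapper expects nvme connect to be rejected ("Subsystem ... does not allow host ..."); nvmf_subsystem_add_host then admits the host NQN, nvmf_subsystem_remove_host revokes it, and nvmf_subsystem_allow_any_host -e reopens the subsystem to everyone. A minimal standalone sketch of the same sequence, assuming a running SPDK nvmf target, scripts/rpc.py reachable as rpc.py, and the listener/NQN values used in this run:

  SUBNQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

  # No host admitted yet: the fabrics connect must fail with an access error.
  nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN" &&
      echo "unexpected: connect succeeded" >&2

  rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"     # admit this host only
  nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
  nvme disconnect -n "$SUBNQN"

  rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"  # revoke it again
  rpc.py nvmf_subsystem_allow_any_host -e "$SUBNQN"       # or drop the host list entirely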
19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.919 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:57.304 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:57.304 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:57.304 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:57.304 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:57.304 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:59.219 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:59.219 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:59.219 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:59.219 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:59.219 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:59.219 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:59.219 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:59.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.219 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:59.219 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:59.219 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:59.219 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.219 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:59.219 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.481 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:59.481 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.481 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.481 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.481 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.481 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:59.481 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:59.481 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:59.481 
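waitforserial and waitforserial_disconnect, expanded twice in the trace above, just poll lsblk until a block device whose SERIAL matches the -s value given to nvmf_create_subsystem appears or disappears. A simplified sketch condensed from the autotest_common.sh expansion (the real helpers also take an optional expected device count):

  waitforserial() {
      local serial=$1 i=0
      while (( i++ <= 15 )); do
          (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
          sleep 2
      done
      return 1
  }

  waitforserial_disconnect() {
      local serial=$1 i=0
      while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
          (( i++ > 15 )) && return 1
          sleep 1
      done
      return 0
  }

  waitforserial SPDKISFASTANDAWESOME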
19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.481 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.481 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.481 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.481 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.481 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.481 [2024-11-26 19:51:00.087827] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.481 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.481 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:59.481 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.481 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.481 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.481 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:59.481 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.481 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.481 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.481 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:00.866 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:00.866 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:00.866 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:00.866 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:00.866 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:03.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.409 [2024-11-26 19:51:03.843073] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.409 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.793 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:04.793 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:04.793 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:04.793 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:04.793 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:06.707 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:06.707 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:06.707 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.707 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:06.707 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.707 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:06.707 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:06.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.707 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:06.707 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:06.707 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:06.707 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.707 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:06.707 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.707 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:06.707 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:06.707 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.707 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.707 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.707 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.707 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.707 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.968 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.968 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:06.968 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:06.968 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.968 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.968 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.968 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.968 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.968 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.968 [2024-11-26 19:51:07.549644] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.968 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.968 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:06.968 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.968 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.968 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.968 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:06.968 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.968 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.968 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.968 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:08.354 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:08.354 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:08.354 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.354 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:08.354 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:10.270 
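From target/rpc.sh@81 onward the trace settles into five identical create/attach/detach cycles; the iteration boundaries are easiest to spot at the nvmf_create_subsystem lines. One cycle, reconstructed from the commands above (rpc.py stands in for the test's rpc_cmd wrapper):

  loops=5
  for i in $(seq 1 $loops); do
      rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5    # fixed NSID 5
      rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
          --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
          --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
      waitforserial SPDKISFASTANDAWESOME
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
      waitforserial_disconnect SPDKISFASTANDAWESOME
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
      rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done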
19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:10.270 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:10.270 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:10.270 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:10.270 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.270 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:10.270 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:10.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.532 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:10.532 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:10.532 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:10.532 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.532 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:10.532 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.532 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:10.532 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:10.533 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.533 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.533 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.533 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.533 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.533 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.533 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.533 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:10.533 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:10.533 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.533 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.533 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.533 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.533 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:10.533 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.533 [2024-11-26 19:51:11.278092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.533 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.533 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:10.533 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.533 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.533 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.533 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:10.533 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.533 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.533 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.533 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:12.447 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:12.447 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:12.447 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:12.447 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:12.447 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:14.361 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:14.361 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:14.361 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:14.361 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:14.361 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.361 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:14.361 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:14.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.361 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:14.361 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:14.361 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:14.361 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
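Throughout the trace every rpc_cmd invocation is bracketed by xtrace_disable/set +x and followed by a "[[ 0 == 0 ]]" line: the wrapper mutes xtrace while it issues the RPC, then asserts that the call returned 0. In autotest runs rpc_cmd typically multiplexes commands over a persistent "rpc.py --server" instance; a behaviorally equivalent one-shot sketch:

  rpc_cmd() {
      # One-shot stand-in: hand the method name and its arguments straight to rpc.py.
      scripts/rpc.py "$@"
  }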
00:13:14.361 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:14.361 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.361 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:14.361 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.361 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.361 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.361 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.361 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:14.361 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.361 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.361 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.361 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:14.361 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:14.361 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.361 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.361 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.361 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.361 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.361 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.361 [2024-11-26 19:51:15.072713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.361 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.361 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:14.361 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.361 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.362 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.362 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:14.362 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.362 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.362 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.362 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:16.280 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:16.280 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:16.280 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:16.280 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:16.280 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:18.198 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:18.198 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:18.198 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:18.198 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:18.198 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:18.198 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:18.198 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:18.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.198 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:18.198 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:18.198 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:18.198 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.198 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:18.198 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.198 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:18.198 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:18.198 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:18.199 
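The second seq 1 5 loop that begins here (target/rpc.sh@99-107) repeats the subsystem lifecycle with no host connection at all, exercising namespace hot-add/remove on its own: nvmf_subsystem_add_ns is issued without -n, so the target assigns the first free NSID (1), which is then removed explicitly. Per iteration, roughly:

  for i in $(seq 1 $loops); do
      rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # no -n: NSID 1 auto-assigned
      rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done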
19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.199 [2024-11-26 19:51:18.886200] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.199 [2024-11-26 19:51:18.954361] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.199 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.199 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.199 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:18.199 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.199 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.199 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.460 
19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.460 [2024-11-26 19:51:19.022557] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.460 [2024-11-26 19:51:19.094799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.460 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.461 [2024-11-26 19:51:19.163025] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.461 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.461 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.461 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.461 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.461 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.461 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.461 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.461 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.461 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.461 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.461 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.461 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.461 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.461 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.461 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.461 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.461 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.461 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:18.461 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.461 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.461 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.461 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:18.461 "tick_rate": 2400000000, 00:13:18.461 "poll_groups": [ 00:13:18.461 { 00:13:18.461 "name": "nvmf_tgt_poll_group_000", 00:13:18.461 "admin_qpairs": 0, 00:13:18.461 "io_qpairs": 224, 00:13:18.461 "current_admin_qpairs": 0, 00:13:18.461 "current_io_qpairs": 0, 00:13:18.461 "pending_bdev_io": 0, 00:13:18.461 "completed_nvme_io": 225, 00:13:18.461 "transports": [ 00:13:18.461 { 00:13:18.461 "trtype": "TCP" 00:13:18.461 } 00:13:18.461 ] 00:13:18.461 }, 00:13:18.461 { 00:13:18.461 "name": "nvmf_tgt_poll_group_001", 00:13:18.461 "admin_qpairs": 1, 00:13:18.461 "io_qpairs": 223, 00:13:18.461 "current_admin_qpairs": 0, 00:13:18.461 "current_io_qpairs": 0, 00:13:18.461 "pending_bdev_io": 0, 00:13:18.461 "completed_nvme_io": 256, 00:13:18.461 "transports": [ 00:13:18.461 { 00:13:18.461 "trtype": "TCP" 00:13:18.461 } 00:13:18.461 ] 00:13:18.461 }, 00:13:18.461 { 00:13:18.461 "name": "nvmf_tgt_poll_group_002", 00:13:18.461 "admin_qpairs": 6, 00:13:18.461 "io_qpairs": 218, 00:13:18.461 "current_admin_qpairs": 0, 00:13:18.461 "current_io_qpairs": 0, 00:13:18.461 "pending_bdev_io": 0, 00:13:18.461 "completed_nvme_io": 517, 00:13:18.461 "transports": [ 00:13:18.461 { 00:13:18.461 "trtype": "TCP" 00:13:18.461 } 00:13:18.461 ] 00:13:18.461 }, 00:13:18.461 { 00:13:18.461 "name": "nvmf_tgt_poll_group_003", 00:13:18.461 "admin_qpairs": 0, 00:13:18.461 "io_qpairs": 224, 00:13:18.461 "current_admin_qpairs": 0, 00:13:18.461 "current_io_qpairs": 0, 00:13:18.461 "pending_bdev_io": 0, 00:13:18.461 "completed_nvme_io": 241, 00:13:18.461 "transports": [ 00:13:18.461 { 00:13:18.461 "trtype": "TCP" 00:13:18.461 } 00:13:18.461 ] 00:13:18.461 } 00:13:18.461 ] 00:13:18.461 }' 00:13:18.461 19:51:19 
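The nvmf_get_stats JSON captured above (one object per poll group, with admin/io queue-pair and completion counters) feeds the jsum checks traced next: jsum sums one numeric field across all poll groups with jq and awk. Roughly, with $stats holding the JSON just captured:

  jsum() {
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }
  jsum '.poll_groups[].admin_qpairs'   # 0 + 1 + 6 + 0         = 7   in this run
  jsum '.poll_groups[].io_qpairs'      # 224 + 223 + 218 + 224 = 889 in this run

Both sums only need to be positive for the test to pass; the exact counts vary from run to run.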
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:18.461 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:18.461 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:18.461 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:18.721 rmmod nvme_tcp 00:13:18.721 rmmod nvme_fabrics 00:13:18.721 rmmod nvme_keyring 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3570771 ']' 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3570771 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3570771 ']' 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3570771 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3570771 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3570771' 00:13:18.721 killing process with pid 3570771 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3570771 00:13:18.721 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3570771 00:13:18.982 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:18.982 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:18.982 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:18.982 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:18.982 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:18.982 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:18.982 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:18.982 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:18.982 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:18.982 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.982 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.982 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.962 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:20.962 00:13:20.962 real 0m38.250s 00:13:20.962 user 1m54.500s 00:13:20.962 sys 0m7.970s 00:13:20.962 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.962 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.962 ************************************ 00:13:20.962 END TEST nvmf_rpc 00:13:20.962 ************************************ 00:13:20.962 19:51:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:20.962 19:51:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:20.962 19:51:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.962 19:51:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:20.962 ************************************ 00:13:20.962 START TEST nvmf_invalid 00:13:20.962 ************************************ 00:13:20.962 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:21.262 * Looking for test storage... 
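Before run_test moved on to nvmf_invalid above, nvmftestfini tore the TCP fixture down: about 38 s wall / 1 m 54 s of CPU for the whole nvmf_rpc run, then module unload, target kill, and firewall/address cleanup. In rough outline, per the trace (nvmfpid is the target's pid, 3570771 in this run; cvl_0_1 is this testbed's NIC):

  sync
  modprobe -v -r nvme-tcp        # also drops the nvme_fabrics/nvme_keyring deps
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip the test firewall rules
  ip -4 addr flush cvl_0_1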
00:13:21.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:21.262 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:21.262 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:13:21.262 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:21.262 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:21.262 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:21.262 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:21.262 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:21.262 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:21.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.263 --rc genhtml_branch_coverage=1 00:13:21.263 --rc genhtml_function_coverage=1 00:13:21.263 --rc genhtml_legend=1 00:13:21.263 --rc geninfo_all_blocks=1 00:13:21.263 --rc geninfo_unexecuted_blocks=1 00:13:21.263 00:13:21.263 ' 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:21.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.263 --rc genhtml_branch_coverage=1 00:13:21.263 --rc genhtml_function_coverage=1 00:13:21.263 --rc genhtml_legend=1 00:13:21.263 --rc geninfo_all_blocks=1 00:13:21.263 --rc geninfo_unexecuted_blocks=1 00:13:21.263 00:13:21.263 ' 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:21.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.263 --rc genhtml_branch_coverage=1 00:13:21.263 --rc genhtml_function_coverage=1 00:13:21.263 --rc genhtml_legend=1 00:13:21.263 --rc geninfo_all_blocks=1 00:13:21.263 --rc geninfo_unexecuted_blocks=1 00:13:21.263 00:13:21.263 ' 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:21.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.263 --rc genhtml_branch_coverage=1 00:13:21.263 --rc genhtml_function_coverage=1 00:13:21.263 --rc genhtml_legend=1 00:13:21.263 --rc geninfo_all_blocks=1 00:13:21.263 --rc geninfo_unexecuted_blocks=1 00:13:21.263 00:13:21.263 ' 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:21.263 19:51:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:21.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:21.263 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:21.264 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.264 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:21.264 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.264 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:21.264 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:21.264 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:21.264 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:29.496 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:29.496 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:29.497 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:29.497 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:29.497 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:29.497 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:29.497 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:29.497 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:29.497 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:29.497 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:29.497 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:29.497 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:29.497 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:29.497 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:29.497 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:29.497 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:29.497 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:29.497 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:29.497 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:29.497 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:29.497 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:29.497 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:29.497 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:29.497 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:29.497 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:29.497 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:29.497 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:29.497 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:29.497 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:29.497 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:29.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:29.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:13:29.497 00:13:29.497 --- 10.0.0.2 ping statistics --- 00:13:29.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.497 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:13:29.497 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:29.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:29.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:13:29.497 00:13:29.497 --- 10.0.0.1 ping statistics --- 00:13:29.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.498 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:13:29.498 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.498 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:29.498 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:29.498 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.498 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:29.498 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:29.498 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.498 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:29.498 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:29.498 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:29.498 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:29.498 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:29.498 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:29.498 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3580987 00:13:29.498 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3580987 00:13:29.498 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:29.498 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3580987 ']' 00:13:29.498 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.498 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:29.498 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.498 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:29.498 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:29.498 [2024-11-26 19:51:29.412685] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
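The TCP fabric used from here on was assembled just above (nvmf/common.sh@250-291) out of one physical e810 port pair rather than virtual devices: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the two pings confirm the path in both directions before any NVMe-oF traffic flows. Condensed from the traced commands, with interface names and addresses exactly as logged:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port gets its own netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ping -c 1 10.0.0.2                                   # initiator -> target sanity check

nvmf_tgt is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF, nvmf/common.sh@508), and waitforlisten blocks until the /var/tmp/spdk.sock RPC socket answers, which is the 'Waiting for process to start up...' line above.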
00:13:29.498 [2024-11-26 19:51:29.412752] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:29.498 [2024-11-26 19:51:29.512448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:29.498 [2024-11-26 19:51:29.566101] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:29.498 [2024-11-26 19:51:29.566154] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:29.498 [2024-11-26 19:51:29.566174] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:29.498 [2024-11-26 19:51:29.566187] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:29.498 [2024-11-26 19:51:29.566193] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:29.498 [2024-11-26 19:51:29.568214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:29.498 [2024-11-26 19:51:29.568500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:29.498 [2024-11-26 19:51:29.568656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:13:29.498 [2024-11-26 19:51:29.568658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:29.498 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:29.498 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0
00:13:29.498 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:13:29.498 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable
00:13:29.498 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:29.498 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:29.498 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:13:29.498 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17830
00:13:29.758 [2024-11-26 19:51:30.427721] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:13:29.758 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:13:29.758 {
00:13:29.758 "nqn": "nqn.2016-06.io.spdk:cnode17830",
00:13:29.758 "tgt_name": "foobar",
00:13:29.758 "method": "nvmf_create_subsystem",
00:13:29.758 "req_id": 1
00:13:29.758 }
00:13:29.758 Got JSON-RPC error response
00:13:29.758 response:
00:13:29.758 {
00:13:29.758 "code": -32603,
00:13:29.758 "message": "Unable to find target foobar"
00:13:29.758 }'
00:13:29.758 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:13:29.758 {
00:13:29.758 "nqn": "nqn.2016-06.io.spdk:cnode17830",
00:13:29.758 "tgt_name": "foobar",
00:13:29.758 "method": "nvmf_create_subsystem",
00:13:29.758 "req_id": 1
00:13:29.758 }
00:13:29.758 Got JSON-RPC error response
00:13:29.758 response:
00:13:29.758 {
00:13:29.758 "code": -32603,
00:13:29.758 "message": "Unable to find target foobar"
00:13:29.758 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:13:29.758 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:13:29.758 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode6100
00:13:30.019 [2024-11-26 19:51:30.620420] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6100: invalid serial number 'SPDKISFASTANDAWESOME'
00:13:30.019 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:13:30.019 {
00:13:30.019 "nqn": "nqn.2016-06.io.spdk:cnode6100",
00:13:30.019 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:13:30.019 "method": "nvmf_create_subsystem",
00:13:30.019 "req_id": 1
00:13:30.019 }
00:13:30.019 Got JSON-RPC error response
00:13:30.019 response:
00:13:30.019 {
00:13:30.019 "code": -32602,
00:13:30.019 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:13:30.019 }'
00:13:30.019 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:13:30.019 {
00:13:30.019 "nqn": "nqn.2016-06.io.spdk:cnode6100",
00:13:30.019 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:13:30.019 "method": "nvmf_create_subsystem",
00:13:30.019 "req_id": 1
00:13:30.019 }
00:13:30.019 Got JSON-RPC error response
00:13:30.019 response:
00:13:30.019 {
00:13:30.019 "code": -32602,
00:13:30.019 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:13:30.019 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:13:30.019 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:13:30.019 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode11757
00:13:30.019 [2024-11-26 19:51:30.804940] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11757: invalid model number 'SPDK_Controller'
00:13:30.019 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:13:30.019 {
00:13:30.019 "nqn": "nqn.2016-06.io.spdk:cnode11757",
00:13:30.019 "model_number": "SPDK_Controller\u001f",
00:13:30.019 "method": "nvmf_create_subsystem",
00:13:30.019 "req_id": 1
00:13:30.019 }
00:13:30.019 Got JSON-RPC error response
00:13:30.019 response:
00:13:30.019 {
00:13:30.019 "code": -32602,
00:13:30.019 "message": "Invalid MN SPDK_Controller\u001f"
00:13:30.019 }'
00:13:30.280 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:13:30.280 {
00:13:30.280 "nqn": "nqn.2016-06.io.spdk:cnode11757",
00:13:30.280 "model_number": "SPDK_Controller\u001f",
00:13:30.280 "method": "nvmf_create_subsystem",
00:13:30.280 "req_id": 1
00:13:30.280 }
00:13:30.280 Got JSON-RPC error response
00:13:30.280 response:
00:13:30.280 {
00:13:30.280 "code": -32602,
00:13:30.280 "message": "Invalid MN SPDK_Controller\u001f"
00:13:30.280 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:13:30.280 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:13:30.280 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
19:51:30
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:30.280 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:30.280 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:30.280 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:30.280 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.280 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:30.280 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:30.280 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:30.280 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.280 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.280 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:30.280 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:30.280 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:30.280 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.280 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.280 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:30.280 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:30.280 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:30.280 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.280 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.280 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:30.280 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.281 19:51:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:30.281 19:51:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.281 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.281 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:30.281 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:30.281 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:30.281 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.281 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.281 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ( == \- ]] 00:13:30.281 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '(InOZ-9OfSs,!V"!Ce1#>' 00:13:30.281 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '(InOZ-9OfSs,!V"!Ce1#>' nqn.2016-06.io.spdk:cnode711 00:13:30.542 [2024-11-26 19:51:31.162104] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode711: invalid serial number '(InOZ-9OfSs,!V"!Ce1#>' 00:13:30.542 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- 
# out='request: 00:13:30.542 { 00:13:30.542 "nqn": "nqn.2016-06.io.spdk:cnode711", 00:13:30.542 "serial_number": "(InOZ-9OfSs,!V\"!Ce1#>", 00:13:30.542 "method": "nvmf_create_subsystem", 00:13:30.542 "req_id": 1 00:13:30.542 } 00:13:30.542 Got JSON-RPC error response 00:13:30.542 response: 00:13:30.542 { 00:13:30.542 "code": -32602, 00:13:30.542 "message": "Invalid SN (InOZ-9OfSs,!V\"!Ce1#>" 00:13:30.542 }' 00:13:30.542 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:30.542 { 00:13:30.542 "nqn": "nqn.2016-06.io.spdk:cnode711", 00:13:30.542 "serial_number": "(InOZ-9OfSs,!V\"!Ce1#>", 00:13:30.542 "method": "nvmf_create_subsystem", 00:13:30.542 "req_id": 1 00:13:30.542 } 00:13:30.542 Got JSON-RPC error response 00:13:30.542 response: 00:13:30.542 { 00:13:30.542 "code": -32602, 00:13:30.542 "message": "Invalid SN (InOZ-9OfSs,!V\"!Ce1#>" 00:13:30.543 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 
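The wall of printf/echo trace surrounding this point is gen_random_s (target/invalid.sh@19-31) spelling out a random string one character at a time: each pass picks a decimal code from the chars array (ASCII 32-127), renders it as hex with printf %x, and appends the literal byte with echo -e, first for the 21-character serial number rejected above and now for a 41-character one. A compact sketch of the same loop; the RANDOM-modulo index is an assumption, though the script does seed RANDOM=0 (target/invalid.sh@16) so the draws are reproducible:

gen_random_s() {
    local length=$1 ll string=
    local chars=({32..127})   # decimal codes for printable ASCII, as traced
    for (( ll = 0; ll < length; ll++ )); do
        # pick a code, render it as \xHH, append the character (index is assumed)
        string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
    done
    echo "$string"
}

Each generated string is fed back to nvmf_create_subsystem as -s (serial number) or -d (model number), and the test only passes if the target rejects it with an 'Invalid SN' / 'Invalid MN' JSON-RPC error, as in the cnode711 response above.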
00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x6b' 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 73 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.543 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll < length )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 
-- # (( ll++ )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:30.805 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:30.806 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+='~'
00:13:30.806 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:30.806 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:30.806 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85
00:13:30.806 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55'
00:13:30.806 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U
00:13:30.806 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:30.806 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:30.806 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52
00:13:30.806 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34'
00:13:30.806 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4
00:13:30.806 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:30.806 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:30.806 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94
00:13:30.806 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e'
00:13:30.806 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^'
00:13:30.806 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:30.806 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:30.806 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116
00:13:30.806 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74'
00:13:30.806 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t
00:13:30.806 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:30.806 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:30.806 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[  == \- ]]
00:13:30.806 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'uP$RIzEskd=g"h'\''Ip`#Q#nc*0~U4^t'
00:13:30.806 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'uP$RIzEskd=g"h'\''Ip`#Q#nc*0~U4^t' nqn.2016-06.io.spdk:cnode20613
00:13:31.067 [2024-11-26 19:51:31.671762] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20613: invalid model number 'uP$RIzEskd=g"h'Ip`#Q#nc*0~U4^t'
00:13:31.067 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request:
00:13:31.067 {
00:13:31.067 "nqn": "nqn.2016-06.io.spdk:cnode20613",
00:13:31.067 "model_number": "\u007fuP$RIzEskd=g\"h'\''Ip`#Q\u007f#n\u007fc*0~U4^t",
00:13:31.067 "method": "nvmf_create_subsystem",
00:13:31.067 "req_id": 1
00:13:31.067 }
00:13:31.067 Got JSON-RPC error response
00:13:31.067 response:
00:13:31.067 {
00:13:31.067 "code": -32602,
00:13:31.067 "message": "Invalid MN \u007fuP$RIzEskd=g\"h'\''Ip`#Q\u007f#n\u007fc*0~U4^t"
00:13:31.067 }'
00:13:31.067 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request:
00:13:31.067 {
00:13:31.067 "nqn": "nqn.2016-06.io.spdk:cnode20613",
00:13:31.067 "model_number": "\u007fuP$RIzEskd=g\"h'Ip`#Q\u007f#n\u007fc*0~U4^t",
00:13:31.067 "method": "nvmf_create_subsystem",
00:13:31.067 "req_id": 1
00:13:31.067 }
00:13:31.067 Got JSON-RPC error response
00:13:31.067 response:
00:13:31.067 {
00:13:31.067 "code": -32602,
00:13:31.067 "message": "Invalid MN \u007fuP$RIzEskd=g\"h'Ip`#Q\u007f#n\u007fc*0~U4^t"
00:13:31.067 } == *\I\n\v\a\l\i\d\ \M\N* ]]
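The trace above is target/invalid.sh assembling a random model number one byte at a time: each pass picks a code point, prints it as hex with printf %x, renders it with echo -e '\xNN', and appends the result to string, so the finished value can legally contain quotes, backticks, and DEL (0x7f) bytes that the target is expected to reject. A minimal self-contained sketch of the same technique (the length and byte range here are assumptions, not the script's exact values):

#!/usr/bin/env bash
# Sketch of the byte-by-byte random-string build traced above.
# Assumed: 30 characters drawn from printable ASCII 0x21-0x7e.
length=30
string=
for (( ll = 0; ll < length; ll++ )); do
    code=$(( RANDOM % 94 + 33 ))     # pick a printable code point
    hex=$(printf %x "$code")         # e.g. 6b
    string+=$(echo -e "\x$hex")      # render the byte, e.g. append 'k'
done
echo "$string"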
00:13:31.067 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp
00:13:31.067 [2024-11-26 19:51:31.860474] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:31.328 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
00:13:31.328 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]]
00:13:31.328 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo ''
00:13:31.328 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1
00:13:31.328 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=
00:13:31.328 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421
00:13:31.588 [2024-11-26 19:51:32.245653] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2
00:13:31.588 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request:
00:13:31.588 {
00:13:31.588 "nqn": "nqn.2016-06.io.spdk:cnode",
00:13:31.588 "listen_address": {
00:13:31.588 "trtype": "tcp",
00:13:31.588 "traddr": "",
00:13:31.588 "trsvcid": "4421"
00:13:31.588 },
00:13:31.588 "method": "nvmf_subsystem_remove_listener",
00:13:31.588 "req_id": 1
00:13:31.588 }
00:13:31.588 Got JSON-RPC error response
00:13:31.588 response:
00:13:31.588 {
00:13:31.588 "code": -32602,
00:13:31.588 "message": "Invalid parameters"
00:13:31.588 }'
00:13:31.588 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request:
00:13:31.588 {
00:13:31.588 "nqn": "nqn.2016-06.io.spdk:cnode",
00:13:31.588 "listen_address": {
00:13:31.588 "trtype": "tcp",
00:13:31.588 "traddr": "",
00:13:31.588 "trsvcid": "4421"
00:13:31.588 },
00:13:31.588 "method": "nvmf_subsystem_remove_listener",
00:13:31.588 "req_id": 1
00:13:31.588 }
00:13:31.588 Got JSON-RPC error response
00:13:31.588 response:
00:13:31.588 {
00:13:31.588 "code": -32602,
00:13:31.588 "message": "Invalid parameters"
00:13:31.588 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]]
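Here $IP ends up empty (the subsystem has no listeners yet), so nvmf_subsystem_remove_listener is being driven with -a '' purely to provoke a failure; the check above only insists the error comes back as a clean "Invalid parameters" rather than "Unable to stop listener.". For contrast, a hedged sketch of the normal add/remove pairing (the address is an assumption taken from the TCP setup later in this log):

# Sketch (assumed 10.0.0.2/4421): a listener must exist before it can be removed.
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode -t tcp -a 10.0.0.2 -s 4421
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a 10.0.0.2 -s 4421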
00:13:31.588 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19093 -i 0
00:13:31.848 [2024-11-26 19:51:32.430195] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19093: invalid cntlid range [0-65519]
00:13:31.848 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request:
00:13:31.848 {
00:13:31.848 "nqn": "nqn.2016-06.io.spdk:cnode19093",
00:13:31.848 "min_cntlid": 0,
00:13:31.848 "method": "nvmf_create_subsystem",
00:13:31.848 "req_id": 1
00:13:31.848 }
00:13:31.848 Got JSON-RPC error response
00:13:31.848 response:
00:13:31.848 {
00:13:31.848 "code": -32602,
00:13:31.848 "message": "Invalid cntlid range [0-65519]"
00:13:31.848 }'
00:13:31.848 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request:
00:13:31.848 {
00:13:31.848 "nqn": "nqn.2016-06.io.spdk:cnode19093",
00:13:31.848 "min_cntlid": 0,
00:13:31.848 "method": "nvmf_create_subsystem",
00:13:31.848 "req_id": 1
00:13:31.848 }
00:13:31.848 Got JSON-RPC error response
00:13:31.848 response:
00:13:31.848 {
00:13:31.848 "code": -32602,
00:13:31.848 "message": "Invalid cntlid range [0-65519]"
00:13:31.848 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:13:31.848 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19538 -i 65520
00:13:31.848 [2024-11-26 19:51:32.614799] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19538: invalid cntlid range [65520-65519]
00:13:31.848 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request:
00:13:31.848 {
00:13:31.848 "nqn": "nqn.2016-06.io.spdk:cnode19538",
00:13:31.848 "min_cntlid": 65520,
00:13:31.848 "method": "nvmf_create_subsystem",
00:13:31.848 "req_id": 1
00:13:31.848 }
00:13:31.848 Got JSON-RPC error response
00:13:31.848 response:
00:13:31.848 {
00:13:31.848 "code": -32602,
00:13:31.848 "message": "Invalid cntlid range [65520-65519]"
00:13:31.848 }'
00:13:31.848 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request:
00:13:31.848 {
00:13:31.848 "nqn": "nqn.2016-06.io.spdk:cnode19538",
00:13:31.848 "min_cntlid": 65520,
00:13:31.848 "method": "nvmf_create_subsystem",
00:13:31.848 "req_id": 1
00:13:31.848 }
00:13:31.848 Got JSON-RPC error response
00:13:31.848 response:
00:13:31.848 {
00:13:31.848 "code": -32602,
00:13:31.848 "message": "Invalid cntlid range [65520-65519]"
00:13:31.848 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:13:31.849 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16535 -I 0
00:13:32.110 [2024-11-26 19:51:32.807385] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16535: invalid cntlid range [1-0]
00:13:32.110 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request:
00:13:32.110 {
00:13:32.110 "nqn": "nqn.2016-06.io.spdk:cnode16535",
00:13:32.110 "max_cntlid": 0,
00:13:32.110 "method": "nvmf_create_subsystem",
00:13:32.110 "req_id": 1
00:13:32.110 }
00:13:32.110 Got JSON-RPC error response
00:13:32.110 response:
00:13:32.110 {
00:13:32.110 "code": -32602,
00:13:32.110 "message": "Invalid cntlid range [1-0]"
00:13:32.110 }'
00:13:32.110 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request:
00:13:32.110 {
00:13:32.110 "nqn": "nqn.2016-06.io.spdk:cnode16535",
00:13:32.110 "max_cntlid": 0,
00:13:32.110 "method": "nvmf_create_subsystem",
00:13:32.110 "req_id": 1
00:13:32.110 }
00:13:32.110 Got JSON-RPC error response
00:13:32.110 response:
00:13:32.110 {
00:13:32.110 "code": -32602,
00:13:32.110 "message": "Invalid cntlid range [1-0]"
00:13:32.110 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:13:32.110 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23416 -I 65520
00:13:32.371 [2024-11-26 19:51:33.000007] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23416: invalid cntlid range [1-65520]
00:13:32.371 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request:
00:13:32.371 {
00:13:32.371 "nqn": "nqn.2016-06.io.spdk:cnode23416",
00:13:32.371 "max_cntlid": 65520,
00:13:32.371 "method": "nvmf_create_subsystem",
00:13:32.371 "req_id": 1
00:13:32.371 }
00:13:32.371 Got JSON-RPC error response
00:13:32.371 response:
00:13:32.371 {
00:13:32.371 "code": -32602,
00:13:32.371 "message": "Invalid cntlid range [1-65520]"
00:13:32.371 }'
00:13:32.371 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request:
00:13:32.371 {
00:13:32.371 "nqn": "nqn.2016-06.io.spdk:cnode23416",
00:13:32.371 "max_cntlid": 65520,
00:13:32.371 "method": "nvmf_create_subsystem",
00:13:32.371 "req_id": 1
00:13:32.371 }
00:13:32.371 Got JSON-RPC error response
00:13:32.371 response:
00:13:32.371 {
00:13:32.371 "code": -32602,
00:13:32.371 "message": "Invalid cntlid range [1-65520]"
00:13:32.371 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:13:32.371 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10357 -i 6 -I 5
00:13:32.632 [2024-11-26 19:51:33.188611] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10357: invalid cntlid range [6-5]
00:13:32.632 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request:
00:13:32.632 {
00:13:32.632 "nqn": "nqn.2016-06.io.spdk:cnode10357",
00:13:32.632 "min_cntlid": 6,
00:13:32.632 "max_cntlid": 5,
00:13:32.632 "method": "nvmf_create_subsystem",
00:13:32.632 "req_id": 1
00:13:32.632 }
00:13:32.632 Got JSON-RPC error response
00:13:32.632 response:
00:13:32.632 {
00:13:32.632 "code": -32602,
00:13:32.632 "message": "Invalid cntlid range [6-5]"
00:13:32.632 }'
00:13:32.632 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request:
00:13:32.632 {
00:13:32.632 "nqn": "nqn.2016-06.io.spdk:cnode10357",
00:13:32.632 "min_cntlid": 6,
00:13:32.632 "max_cntlid": 5,
00:13:32.632 "method": "nvmf_create_subsystem",
00:13:32.632 "req_id": 1
00:13:32.632 }
00:13:32.632 Got JSON-RPC error response
00:13:32.632 response:
00:13:32.632 {
00:13:32.632 "code": -32602,
00:13:32.632 "message": "Invalid cntlid range [6-5]"
00:13:32.632 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
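The five probes above walked the controller-ID boundaries: valid cntlids run 1 through 65519, so a min of 0 or 65520, a max of 0 or 65520, and a min greater than the max each come back as "Invalid cntlid range [a-b]". A compact sketch of the same boundary matrix (the loop is illustrative only; invalid.sh issues each call individually):

# Sketch: replay the cntlid boundary checks above. Valid range is 1..65519,
# so every flag set below must be rejected by nvmf_create_subsystem.
for args in '-i 0' '-i 65520' '-I 0' '-I 65520' '-i 6 -I 5'; do
    # $args is left unquoted on purpose so it word-splits into flags
    out=$(rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$RANDOM $args 2>&1) || true
    [[ $out == *'Invalid cntlid range'* ]] && echo "rejected as expected: $args"
done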
00:13:32.632 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar
00:13:32.632 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request:
00:13:32.632 {
00:13:32.632 "name": "foobar",
00:13:32.632 "method": "nvmf_delete_target",
00:13:32.632 "req_id": 1
00:13:32.632 }
00:13:32.632 Got JSON-RPC error response
00:13:32.632 response:
00:13:32.632 {
00:13:32.632 "code": -32602,
00:13:32.632 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:32.632 }' 00:13:32.632 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:32.632 { 00:13:32.632 "name": "foobar", 00:13:32.632 "method": "nvmf_delete_target", 00:13:32.632 "req_id": 1 00:13:32.632 } 00:13:32.632 Got JSON-RPC error response 00:13:32.632 response: 00:13:32.632 { 00:13:32.632 "code": -32602, 00:13:32.632 "message": "The specified target doesn't exist, cannot delete it." 00:13:32.632 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:32.632 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:32.632 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:32.632 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:32.632 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:32.632 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:32.632 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:32.632 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:32.632 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:32.632 rmmod nvme_tcp 00:13:32.632 rmmod nvme_fabrics 00:13:32.632 rmmod nvme_keyring 00:13:32.632 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:32.632 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:32.632 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:32.632 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3580987 ']' 00:13:32.632 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3580987 00:13:32.632 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 3580987 ']' 00:13:32.632 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 3580987 00:13:32.632 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:13:32.632 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:32.632 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3580987 00:13:32.893 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:32.893 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:32.893 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3580987' 00:13:32.893 killing process with pid 3580987 00:13:32.893 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 3580987 00:13:32.893 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 3580987 00:13:32.893 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:32.893 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:32.893 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:32.893 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:32.893 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:32.893 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:13:32.893 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:13:32.893 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:32.893 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:32.893 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.893 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:32.893 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:35.438 00:13:35.438 real 0m13.926s 00:13:35.438 user 0m20.616s 00:13:35.438 sys 0m6.545s 00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:35.438 ************************************ 00:13:35.438 END TEST nvmf_invalid 00:13:35.438 ************************************ 00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:35.438 ************************************ 00:13:35.438 START TEST nvmf_connect_stress 00:13:35.438 ************************************ 00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:35.438 * Looking for test storage... 
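The star banners and the real/user/sys block above come from the harness's run_test wrapper: it prints a START banner, times the test script, and prints an END banner on success; the '[' 3 -le 1 ']' probe belongs to its argument checking. Roughly, and only as a sketch (the real definition in common/autotest_common.sh does more bookkeeping than this):

# Rough sketch of a run_test-style wrapper; not SPDK's actual helper.
run_test() {
    local name=$1; shift
    (( $# >= 1 )) || return 1      # mirrors the argument-count probe seen above
    echo "************ START TEST $name ************"
    time "$@"                      # source of the real/user/sys lines above
    echo "************ END TEST $name ************"
}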
00:13:35.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version
00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-:
00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1
00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-:
00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2
00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<'
00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2
00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1
00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in
00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1
00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1
00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1
00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1
00:13:35.438 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1
00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2
00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2
00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2
00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2
00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0
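What just ran is scripts/common.sh's element-wise version compare: both version strings are split on '.', '-' and ':' into arrays, each index is compared numerically, and here 1 < 2 at index 0 settles lt 1.15 2 immediately. A self-contained sketch of the same idea (ver_lt is a hypothetical name; the in-tree helpers are lt, cmp_versions and decimal):

# Sketch of the element-wise compare traced above (hypothetical helper name).
ver_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # earlier field decides
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
ver_lt 1.15 2 && echo "1.15 < 2, use the newer lcov options"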
00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:13:35.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:35.439 --rc genhtml_branch_coverage=1
00:13:35.439 --rc genhtml_function_coverage=1
00:13:35.439 --rc genhtml_legend=1
00:13:35.439 --rc geninfo_all_blocks=1
00:13:35.439 --rc geninfo_unexecuted_blocks=1
00:13:35.439
00:13:35.439 '
00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:13:35.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:35.439 --rc genhtml_branch_coverage=1
00:13:35.439 --rc genhtml_function_coverage=1
00:13:35.439 --rc genhtml_legend=1
00:13:35.439 --rc geninfo_all_blocks=1
00:13:35.439 --rc geninfo_unexecuted_blocks=1
00:13:35.439
00:13:35.439 '
00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:13:35.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:35.439 --rc genhtml_branch_coverage=1
00:13:35.439 --rc genhtml_function_coverage=1
00:13:35.439 --rc genhtml_legend=1
00:13:35.439 --rc geninfo_all_blocks=1
00:13:35.439 --rc geninfo_unexecuted_blocks=1
00:13:35.439
00:13:35.439 '
00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:13:35.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:35.439 --rc genhtml_branch_coverage=1
00:13:35.439 --rc genhtml_function_coverage=1
00:13:35.439 --rc genhtml_legend=1
00:13:35.439 --rc geninfo_all_blocks=1
00:13:35.439 --rc geninfo_unexecuted_blocks=1
00:13:35.439
00:13:35.439 '
00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:35.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.439 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.440 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.440 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:35.440 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:35.440 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:35.440 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:43.590 19:51:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:43.590 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:43.590 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:43.590 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:43.590 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.590 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:43.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:13:43.591 00:13:43.591 --- 10.0.0.2 ping statistics --- 00:13:43.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.591 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:43.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:43.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:13:43.591 00:13:43.591 --- 10.0.0.1 ping statistics --- 00:13:43.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.591 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3586203 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3586203 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3586203 ']' 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
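nvmf_tcp_init, traced above, splits the dual-port NIC across network namespaces so one host can act as both NVMe/TCP target and initiator: cvl_0_0 moves into cvl_0_0_ns_spdk as 10.0.0.2 while cvl_0_1 stays in the root namespace as 10.0.0.1, a firewall rule admits port 4420, and two pings prove reachability in both directions. A condensed sketch using the names from this log:

TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                          # target NIC leaves root ns
ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target side
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# tag the rule so teardown can filter it back out of iptables-save output
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                                         # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1                     # target ns -> root ns
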
/var/tmp/spdk.sock...' 00:13:43.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:43.591 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.591 [2024-11-26 19:51:43.568652] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:13:43.591 [2024-11-26 19:51:43.568715] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.591 [2024-11-26 19:51:43.669704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:43.591 [2024-11-26 19:51:43.722034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.591 [2024-11-26 19:51:43.722091] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.591 [2024-11-26 19:51:43.722100] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.591 [2024-11-26 19:51:43.722108] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.591 [2024-11-26 19:51:43.722114] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:43.591 [2024-11-26 19:51:43.724243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.591 [2024-11-26 19:51:43.724575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:43.591 [2024-11-26 19:51:43.724576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.591 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:43.591 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:43.591 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:43.591 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:43.591 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.852 [2024-11-26 19:51:44.454542] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
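nvmfappstart then launches nvmf_tgt inside the target namespace and waitforlisten blocks until the app's RPC socket answers. A sketch of that pattern; spdk_get_version is an assumed stand-in probe (the real waitforlisten lives in autotest_common.sh), and paths are relative to an SPDK checkout:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do                  # bounded retry budget
    # the RPC only succeeds once /var/tmp/spdk.sock is accepting connections
    ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version &>/dev/null && break
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
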
00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.852 [2024-11-26 19:51:44.480197] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.852 NULL1 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3586424 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.852 19:51:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.852 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.853 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.853 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.853 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.853 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.853 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.853 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.853 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.853 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.853 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.853 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.853 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.853 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.853 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.853 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.853 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.853 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.853 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.853 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.853 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.853 19:51:44 
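With the target listening, connect_stress.sh provisions it over JSON-RPC (rpc_cmd is a thin wrapper around scripts/rpc.py) and starts the stressor. Replayed with the exact arguments from this run: a TCP transport with 8192-byte IO units, a subsystem that allows any host (-a) with serial SPDK00000000000001 and at most 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MiB null bdev backing the I/O:

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512     # name, size in MiB, block size
# hammer the subsystem with connect/disconnect cycles for 10 seconds:
./test/nvme/connect_stress/connect_stress -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -t 10 &
PERF_PID=$!
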
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:43.853 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.853 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.853 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.424 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.424 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:44.424 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.424 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.424 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.686 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.686 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:44.686 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.686 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.686 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.946 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.946 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:44.946 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.946 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.946 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.207 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.207 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:45.207 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.207 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.207 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.468 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.468 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:45.468 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.468 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.468 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.039 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.040 19:51:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:46.040 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.040 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.040 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.301 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.301 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:46.301 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.301 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.301 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.563 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.563 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:46.563 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.563 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.563 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.825 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.825 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:46.825 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.825 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.825 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.087 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.087 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:47.087 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.087 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.087 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.659 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.659 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:47.659 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.659 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.659 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.921 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.921 19:51:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:47.921 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.921 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.921 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.183 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.183 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:48.183 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.183 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.183 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.444 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.444 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:48.444 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.444 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.444 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.704 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.704 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:48.704 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.704 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.704 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.276 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.276 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:49.276 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.276 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.276 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.538 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.538 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:49.538 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.538 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.538 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.798 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.799 19:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:49.799 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.799 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.799 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.059 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.059 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:50.059 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.059 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.059 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.319 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.319 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:50.319 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.319 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.319 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.891 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.891 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:50.891 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.891 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.891 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.151 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.151 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:51.151 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.151 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.151 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.412 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.412 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:51.412 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.412 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.412 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.673 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.673 19:51:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:51.673 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.673 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.673 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.244 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.244 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:52.244 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.244 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.244 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.506 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.506 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:52.506 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.506 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.506 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.767 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.767 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:52.767 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.767 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.767 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.027 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.027 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:53.027 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.027 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.027 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.287 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.287 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:53.287 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.287 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.288 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.857 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.857 19:51:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:53.857 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.857 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.857 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.857 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:54.117 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.117 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3586424 00:13:54.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3586424) - No such process 00:13:54.117 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3586424 00:13:54.117 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:54.117 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:54.117 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:54.118 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:54.118 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:54.118 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:54.118 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:54.118 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:54.118 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:54.118 rmmod nvme_tcp 00:13:54.118 rmmod nvme_fabrics 00:13:54.118 rmmod nvme_keyring 00:13:54.118 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:54.118 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:54.118 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:54.118 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3586203 ']' 00:13:54.118 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3586203 00:13:54.118 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3586203 ']' 00:13:54.118 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3586203 00:13:54.118 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:54.118 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:54.118 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3586203 00:13:54.118 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
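The block of near-identical entries above is the watchdog loop at connect_stress.sh line 34: each iteration, kill -0 probes the stressor PID without delivering a signal while an RPC keeps the target serving admin commands; once kill -0 reports "No such process", wait collects the exit status. The idiom in isolation (the RPC shown is an assumed stand-in for the harness's rpc_cmd traffic):

while kill -0 "$PERF_PID" 2>/dev/null; do        # true while the PID exists
    ./scripts/rpc.py spdk_get_version >/dev/null # exercise the RPC path too
    sleep 1
done
wait "$PERF_PID"                                 # propagate its exit code
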
00:13:54.118 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:54.118 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3586203' 00:13:54.118 killing process with pid 3586203 00:13:54.118 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3586203 00:13:54.118 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3586203 00:13:54.378 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:54.378 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:54.378 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:54.378 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:54.378 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:54.378 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:54.378 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:54.378 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:54.378 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:54.378 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.378 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.378 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.290 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:56.290 00:13:56.290 real 0m21.295s 00:13:56.290 user 0m42.449s 00:13:56.290 sys 0m9.309s 00:13:56.290 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:56.290 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.290 ************************************ 00:13:56.290 END TEST nvmf_connect_stress 00:13:56.290 ************************************ 00:13:56.290 19:51:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:56.290 19:51:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:56.290 19:51:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.290 19:51:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:56.552 ************************************ 00:13:56.552 START TEST nvmf_fused_ordering 00:13:56.552 ************************************ 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:56.552 * Looking for test storage... 
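The teardown traced above inverts every setup step: the SPDK_NVMF comment tag applied at setup lets an iptables-save | grep -v | iptables-restore pipeline strip only the harness's firewall rules, the namespace is removed (returning cvl_0_0 to the root namespace), addresses are flushed, and the host-side NVMe modules are unloaded. A sketch with this run's names (the namespace removal is inferred; the trace hides _remove_spdk_ns's body):

iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only tagged rules
ip netns delete cvl_0_0_ns_spdk                        # frees cvl_0_0
ip -4 addr flush cvl_0_1
modprobe -v -r nvme-tcp nvme-fabrics                   # mirrors the rmmod output above
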
00:13:56.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:56.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.552 --rc genhtml_branch_coverage=1 00:13:56.552 --rc genhtml_function_coverage=1 00:13:56.552 --rc genhtml_legend=1 00:13:56.552 --rc geninfo_all_blocks=1 00:13:56.552 --rc geninfo_unexecuted_blocks=1 00:13:56.552 00:13:56.552 ' 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:56.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.552 --rc genhtml_branch_coverage=1 00:13:56.552 --rc genhtml_function_coverage=1 00:13:56.552 --rc genhtml_legend=1 00:13:56.552 --rc geninfo_all_blocks=1 00:13:56.552 --rc geninfo_unexecuted_blocks=1 00:13:56.552 00:13:56.552 ' 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:56.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.552 --rc genhtml_branch_coverage=1 00:13:56.552 --rc genhtml_function_coverage=1 00:13:56.552 --rc genhtml_legend=1 00:13:56.552 --rc geninfo_all_blocks=1 00:13:56.552 --rc geninfo_unexecuted_blocks=1 00:13:56.552 00:13:56.552 ' 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:56.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.552 --rc genhtml_branch_coverage=1 00:13:56.552 --rc genhtml_function_coverage=1 00:13:56.552 --rc genhtml_legend=1 00:13:56.552 --rc geninfo_all_blocks=1 00:13:56.552 --rc geninfo_unexecuted_blocks=1 00:13:56.552 00:13:56.552 ' 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
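The fused_ordering preamble above gates its lcov options on the tool's version using scripts/common.sh's cmp_versions: both version strings are split on '.', '-' and ':' and compared component by component, with missing components counting as zero. A self-contained sketch of the same comparison:

version_lt() {                        # returns 0 (true) when $1 < $2
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1                          # equal is not less-than
}
version_lt 1.15 2 && echo "lcov 1.15 < 2: keep the legacy --rc spellings"
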
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:56.552 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
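common.sh also mints the initiator's identity here: nvme gen-hostnqn (from nvme-cli) emits a fresh nqn.2014-08.org.nvmexpress:uuid:<uuid> string, and the harness reuses the UUID tail as the host ID. One way to reproduce that pair (the suffix-stripping is an assumed reconstruction, not lifted from common.sh):

NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*:}    # drop everything through the last ':'
echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"
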
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:56.553 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:56.553 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.814 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:56.814 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:56.814 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:56.814 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.963 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:04.963 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:04.963 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:04.963 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:04.963 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:04.963 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:04.963 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:04.963 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:04.963 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:04.963 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:04.963 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:04.963 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:04.963 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:04.963 19:52:04 
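The "[: : integer expression expected" complaint above is genuine but benign: line 33 of nvmf/common.sh hands test(1) an empty string as the left-hand operand of -eq, which '[' cannot parse as an integer, so the check just fails and the script carries on. The failure mode and the usual guard, in isolation (flag is a hypothetical stand-in for whichever variable is unset here):

flag=""
[ "$flag" -eq 1 ]                  # prints: [: : integer expression expected
[ "${flag:-0}" -eq 1 ] || echo "empty defaults to 0; no complaint"
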
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:04.963 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:04.963 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:04.963 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:04.963 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:04.963 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:04.963 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:04.963 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:04.963 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:04.963 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:04.963 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:04.963 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:04.963 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:04.964 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:04.964 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:04.964 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:04.964 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
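pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) is how the trace maps each PCI function to its kernel interface, producing the "Found net devices under ..." lines. The same lookup in isolation, with the device address taken from this run:

pci=0000:4b:00.0
for path in "/sys/bus/pci/devices/$pci/net/"*; do
    [[ -e $path ]] || continue   # the glob stays literal if no netdev is bound
    echo "Found net devices under $pci: ${path##*/}"
done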
-- # net_devs+=("${pci_net_devs[@]}") 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:04.964 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:04.964 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:14:04.964 00:14:04.964 --- 10.0.0.2 ping statistics --- 00:14:04.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.964 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:04.964 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:04.964 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:14:04.964 00:14:04.964 --- 10.0.0.1 ping statistics --- 00:14:04.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.964 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3592708 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3592708 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3592708 ']' 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
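nvmf_tcp_init, traced above, turns the two E810 ports into a point-to-point test topology: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule admits TCP 4420, and two pings prove reachability in both directions. Condensed to its bare commands (interface names and addresses exactly as in this run):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC, isolated
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator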
/var/tmp/spdk.sock...' 00:14:04.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:04.964 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.964 [2024-11-26 19:52:04.912866] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:14:04.965 [2024-11-26 19:52:04.912935] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.965 [2024-11-26 19:52:05.015013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.965 [2024-11-26 19:52:05.064940] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.965 [2024-11-26 19:52:05.064995] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:04.965 [2024-11-26 19:52:05.065003] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:04.965 [2024-11-26 19:52:05.065010] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:04.965 [2024-11-26 19:52:05.065015] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:04.965 [2024-11-26 19:52:05.065827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.965 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.965 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:14:04.965 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:04.965 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:04.965 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.965 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.965 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:04.965 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.965 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.965 [2024-11-26 19:52:05.776988] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:05.227 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.227 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:05.227 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.227 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.227 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
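nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten blocks until the RPC socket answers; only then do the rpc_cmd calls proceed. A hedged approximation of that startup-and-poll pattern (binary path from this run; the loop is illustrative, not the autotest helper itself):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    # rpc_get_methods only succeeds once the app is up and listening
    if "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done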
-- # [[ 0 == 0 ]] 00:14:05.227 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:05.227 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.227 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.227 [2024-11-26 19:52:05.801318] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:05.227 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.227 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:05.227 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.227 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.227 NULL1 00:14:05.227 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.227 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:05.227 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.227 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.227 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.227 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:05.227 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.227 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.227 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.227 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:05.227 [2024-11-26 19:52:05.871621] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
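Steps 15 through 20 of fused_ordering.sh, traced above, configure the target entirely over JSON-RPC: a TCP transport with 8192-byte IO units, a subsystem limited to 10 queue pairs, a listener on 10.0.0.2:4420, and a 1000 MiB null bdev exposed as namespace 1 (the "1GB" the initiator reports below). The same sequence as direct rpc.py calls against the default /var/tmp/spdk.sock (method names and arguments as they appear in the trace):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10               # allow any host, max 10 qpairs
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MiB, 512-byte blocks
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1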
00:14:05.227 [2024-11-26 19:52:05.871663] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3592816 ] 00:14:05.800 Attached to nqn.2016-06.io.spdk:cnode1 00:14:05.800 Namespace ID: 1 size: 1GB 00:14:05.800 fused_ordering(0) 00:14:05.800 fused_ordering(1) 00:14:05.800 fused_ordering(2) 00:14:05.800 fused_ordering(3) 00:14:05.800 fused_ordering(4) 00:14:05.800 fused_ordering(5) 00:14:05.800 fused_ordering(6) 00:14:05.800 fused_ordering(7) 00:14:05.800 fused_ordering(8) 00:14:05.800 fused_ordering(9) 00:14:05.800 fused_ordering(10) 00:14:05.800 fused_ordering(11) 00:14:05.800 fused_ordering(12) 00:14:05.800 fused_ordering(13) 00:14:05.800 fused_ordering(14) 00:14:05.800 fused_ordering(15) 00:14:05.800 fused_ordering(16) 00:14:05.800 fused_ordering(17) 00:14:05.800 fused_ordering(18) 00:14:05.800 fused_ordering(19) 00:14:05.800 fused_ordering(20) 00:14:05.800 fused_ordering(21) 00:14:05.800 fused_ordering(22) 00:14:05.800 fused_ordering(23) 00:14:05.800 fused_ordering(24) 00:14:05.800 fused_ordering(25) 00:14:05.800 fused_ordering(26) 00:14:05.800 fused_ordering(27) 00:14:05.800 fused_ordering(28) 00:14:05.800 fused_ordering(29) 00:14:05.800 fused_ordering(30) 00:14:05.800 fused_ordering(31) 00:14:05.800 fused_ordering(32) 00:14:05.800 fused_ordering(33) 00:14:05.800 fused_ordering(34) 00:14:05.800 fused_ordering(35) 00:14:05.800 fused_ordering(36) 00:14:05.800 fused_ordering(37) 00:14:05.800 fused_ordering(38) 00:14:05.800 fused_ordering(39) 00:14:05.800 fused_ordering(40) 00:14:05.800 fused_ordering(41) 00:14:05.800 fused_ordering(42) 00:14:05.800 fused_ordering(43) 00:14:05.800 fused_ordering(44) 00:14:05.800 fused_ordering(45) 00:14:05.800 fused_ordering(46) 00:14:05.801 fused_ordering(47) 00:14:05.801 fused_ordering(48) 00:14:05.801 fused_ordering(49) 00:14:05.801 fused_ordering(50) 00:14:05.801 fused_ordering(51) 00:14:05.801 fused_ordering(52) 00:14:05.801 fused_ordering(53) 00:14:05.801 fused_ordering(54) 00:14:05.801 fused_ordering(55) 00:14:05.801 fused_ordering(56) 00:14:05.801 fused_ordering(57) 00:14:05.801 fused_ordering(58) 00:14:05.801 fused_ordering(59) 00:14:05.801 fused_ordering(60) 00:14:05.801 fused_ordering(61) 00:14:05.801 fused_ordering(62) 00:14:05.801 fused_ordering(63) 00:14:05.801 fused_ordering(64) 00:14:05.801 fused_ordering(65) 00:14:05.801 fused_ordering(66) 00:14:05.801 fused_ordering(67) 00:14:05.801 fused_ordering(68) 00:14:05.801 fused_ordering(69) 00:14:05.801 fused_ordering(70) 00:14:05.801 fused_ordering(71) 00:14:05.801 fused_ordering(72) 00:14:05.801 fused_ordering(73) 00:14:05.801 fused_ordering(74) 00:14:05.801 fused_ordering(75) 00:14:05.801 fused_ordering(76) 00:14:05.801 fused_ordering(77) 00:14:05.801 fused_ordering(78) 00:14:05.801 fused_ordering(79) 00:14:05.801 fused_ordering(80) 00:14:05.801 fused_ordering(81) 00:14:05.801 fused_ordering(82) 00:14:05.801 fused_ordering(83) 00:14:05.801 fused_ordering(84) 00:14:05.801 fused_ordering(85) 00:14:05.801 fused_ordering(86) 00:14:05.801 fused_ordering(87) 00:14:05.801 fused_ordering(88) 00:14:05.801 fused_ordering(89) 00:14:05.801 fused_ordering(90) 00:14:05.801 fused_ordering(91) 00:14:05.801 fused_ordering(92) 00:14:05.801 fused_ordering(93) 00:14:05.801 fused_ordering(94) 00:14:05.801 fused_ordering(95) 00:14:05.801 fused_ordering(96) 00:14:05.801 fused_ordering(97) 00:14:05.801 fused_ordering(98) 
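The fused_ordering binary then connects with the transport-ID string shown above and drives 1024 numbered iterations (0 through 1023) against namespace 1, logging each as fused_ordering(n). To confirm the listener independently of the SPDK tool, nvme-cli's discover would do (an optional check, assuming nvme-cli is installed alongside the nvme-tcp module loaded earlier):

nvme discover -t tcp -a 10.0.0.2 -s 4420   # should list nqn.2016-06.io.spdk:cnode1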
00:14:05.801 fused_ordering(99) [... fused_ordering(100) through fused_ordering(957) run in unbroken sequence, timestamps advancing from 00:14:05.801 to 00:14:07.492 ...] 00:14:07.492 fused_ordering(958)
00:14:07.492 fused_ordering(959) 00:14:07.492 fused_ordering(960) 00:14:07.492 fused_ordering(961) 00:14:07.492 fused_ordering(962) 00:14:07.492 fused_ordering(963) 00:14:07.492 fused_ordering(964) 00:14:07.492 fused_ordering(965) 00:14:07.492 fused_ordering(966) 00:14:07.492 fused_ordering(967) 00:14:07.492 fused_ordering(968) 00:14:07.492 fused_ordering(969) 00:14:07.492 fused_ordering(970) 00:14:07.492 fused_ordering(971) 00:14:07.492 fused_ordering(972) 00:14:07.492 fused_ordering(973) 00:14:07.492 fused_ordering(974) 00:14:07.492 fused_ordering(975) 00:14:07.492 fused_ordering(976) 00:14:07.492 fused_ordering(977) 00:14:07.492 fused_ordering(978) 00:14:07.492 fused_ordering(979) 00:14:07.492 fused_ordering(980) 00:14:07.492 fused_ordering(981) 00:14:07.492 fused_ordering(982) 00:14:07.492 fused_ordering(983) 00:14:07.492 fused_ordering(984) 00:14:07.492 fused_ordering(985) 00:14:07.492 fused_ordering(986) 00:14:07.492 fused_ordering(987) 00:14:07.492 fused_ordering(988) 00:14:07.492 fused_ordering(989) 00:14:07.492 fused_ordering(990) 00:14:07.492 fused_ordering(991) 00:14:07.492 fused_ordering(992) 00:14:07.492 fused_ordering(993) 00:14:07.492 fused_ordering(994) 00:14:07.492 fused_ordering(995) 00:14:07.492 fused_ordering(996) 00:14:07.492 fused_ordering(997) 00:14:07.492 fused_ordering(998) 00:14:07.492 fused_ordering(999) 00:14:07.492 fused_ordering(1000) 00:14:07.492 fused_ordering(1001) 00:14:07.492 fused_ordering(1002) 00:14:07.492 fused_ordering(1003) 00:14:07.492 fused_ordering(1004) 00:14:07.492 fused_ordering(1005) 00:14:07.492 fused_ordering(1006) 00:14:07.492 fused_ordering(1007) 00:14:07.492 fused_ordering(1008) 00:14:07.492 fused_ordering(1009) 00:14:07.492 fused_ordering(1010) 00:14:07.492 fused_ordering(1011) 00:14:07.492 fused_ordering(1012) 00:14:07.492 fused_ordering(1013) 00:14:07.492 fused_ordering(1014) 00:14:07.492 fused_ordering(1015) 00:14:07.492 fused_ordering(1016) 00:14:07.492 fused_ordering(1017) 00:14:07.492 fused_ordering(1018) 00:14:07.492 fused_ordering(1019) 00:14:07.492 fused_ordering(1020) 00:14:07.492 fused_ordering(1021) 00:14:07.492 fused_ordering(1022) 00:14:07.492 fused_ordering(1023) 00:14:07.492 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:07.492 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:07.492 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:07.492 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:07.492 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:07.492 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:07.492 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:07.492 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:07.492 rmmod nvme_tcp 00:14:07.492 rmmod nvme_fabrics 00:14:07.492 rmmod nvme_keyring 00:14:07.794 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:07.794 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:07.794 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:07.794 19:52:08 
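With all 1024 iterations done, nvmftestfini unwinds the setup in reverse: set +e lets a bounded modprobe -r loop retry while references drain (the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above are its output), the target is killed by pid, and iptr restores the firewall minus the tagged rule. A condensed sketch of that teardown (pid and names from this run; wait works because nvmfappstart started the target from the same shell):

set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break   # retried until initiator refs are gone
    sleep 1
done
set -e
kill 3592708 && wait 3592708                           # nvmf_tgt pid
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rule
ip netns delete cvl_0_0_ns_spdk                        # _remove_spdk_ns equivalent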
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3592708 ']' 00:14:07.794 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3592708 00:14:07.794 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3592708 ']' 00:14:07.794 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3592708 00:14:07.794 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:14:07.794 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:07.794 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3592708 00:14:07.794 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:07.794 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:07.794 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3592708' 00:14:07.794 killing process with pid 3592708 00:14:07.794 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3592708 00:14:07.794 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3592708 00:14:07.794 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:07.794 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:07.794 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:07.794 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:07.794 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:07.794 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:07.794 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:07.795 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:07.795 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:07.795 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.795 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:07.795 19:52:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:10.338 00:14:10.338 real 0m13.452s 00:14:10.338 user 0m7.104s 00:14:10.338 sys 0m7.206s 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:10.338 ************************************ 00:14:10.338 END TEST nvmf_fused_ordering 00:14:10.338 
************************************ 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:10.338 ************************************ 00:14:10.338 START TEST nvmf_ns_masking 00:14:10.338 ************************************ 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:10.338 * Looking for test storage... 00:14:10.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:10.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.338 --rc genhtml_branch_coverage=1 00:14:10.338 --rc genhtml_function_coverage=1 00:14:10.338 --rc genhtml_legend=1 00:14:10.338 --rc geninfo_all_blocks=1 00:14:10.338 --rc geninfo_unexecuted_blocks=1 00:14:10.338 00:14:10.338 ' 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:10.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.338 --rc genhtml_branch_coverage=1 00:14:10.338 --rc genhtml_function_coverage=1 00:14:10.338 --rc genhtml_legend=1 00:14:10.338 --rc geninfo_all_blocks=1 00:14:10.338 --rc geninfo_unexecuted_blocks=1 00:14:10.338 00:14:10.338 ' 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:10.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.338 --rc genhtml_branch_coverage=1 00:14:10.338 --rc genhtml_function_coverage=1 00:14:10.338 --rc genhtml_legend=1 00:14:10.338 --rc geninfo_all_blocks=1 00:14:10.338 --rc geninfo_unexecuted_blocks=1 00:14:10.338 00:14:10.338 ' 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:10.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.338 --rc genhtml_branch_coverage=1 00:14:10.338 --rc genhtml_function_coverage=1 00:14:10.338 --rc genhtml_legend=1 00:14:10.338 --rc geninfo_all_blocks=1 00:14:10.338 --rc geninfo_unexecuted_blocks=1 00:14:10.338 00:14:10.338 ' 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
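Before any target work, the coverage probe above reads the installed lcov version (the awk '{print $NF}' of lcov --version) and, because 1.15 sorts below 2, keeps the pre-2.0 option spelling --rc lcov_branch_coverage=1 rather than the 2.x names. A hedged equivalent of that gate, using sort -V in place of the field-by-field cmp_versions loop the trace walks through:

lt() {
    # True when $1 is strictly older than $2 in version order
    # (stand-in for the scripts/common.sh cmp_versions loop traced above).
    [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
lcov_ver=$(lcov --version | awk '{print $NF}')
lt "$lcov_ver" 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'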
nvmf/common.sh@7 -- # uname -s 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:10.338 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:10.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
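The '[: : integer expression expected' message above is captured stderr from the harness itself, not a test failure: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' while the variable behind it is unset, and test(1) refuses an empty string where a numeric operator needs an integer. The comparison simply exits nonzero and the branch is skipped. A guarded form that would stay quiet (variable name hypothetical):

# ${VAR:-0} substitutes 0 when VAR is unset or empty, so test(1)
# always receives an integer operand.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi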
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=d1ac3983-5e49-4261-8b31-ec0986d135d6 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=fd551bd5-3e25-4cf6-be89-ca8f1fc9bf30 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=6562d574-b306-40bf-ab4b-52970a282073 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:10.339 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:18.484 19:52:18 
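Just above, ns_masking.sh pinned down every identifier the test will reuse: two freshly generated namespace UUIDs, the subsystem NQN, two host NQNs, and a host UUID that is later handed to nvme connect -I. Collected in one place (the UUID values differ on every run):

ns1uuid=$(uuidgen)                      # NGUID for namespace 1
ns2uuid=$(uuidgen)                      # NGUID for namespace 2
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN1=nqn.2016-06.io.spdk:host1
HOSTNQN2=nqn.2016-06.io.spdk:host2
HOSTID=$(uuidgen)                       # passed to 'nvme connect -I'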
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:18.484 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:18.484 19:52:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:18.484 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:18.484 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
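NIC discovery above is plain sysfs walking: the harness seeds per-family arrays with the PCI device IDs it knows (Intel E810 0x1592/0x159b, X722 0x37d2, a list of Mellanox parts), matches both 0000:4b:00.x functions as E810, and resolves each to its kernel netdev name. The resolution step, reduced to its essentials:

# A NIC bound to a kernel driver exposes its interface as a directory
# under /sys/bus/pci/devices/<bdf>/net/; that name is what lands in net_devs.
for pci in 0000:4b:00.0 0000:4b:00.1; do
    for net in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$net" ] && echo "Found net devices under $pci: ${net##*/}"
    done
done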
00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:18.484 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:18.484 19:52:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:18.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:18.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.567 ms 00:14:18.484 00:14:18.484 --- 10.0.0.2 ping statistics --- 00:14:18.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.484 rtt min/avg/max/mdev = 0.567/0.567/0.567/0.000 ms 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:18.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:18.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:14:18.484 00:14:18.484 --- 10.0.0.1 ping statistics --- 00:14:18.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.484 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3597499 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3597499 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3597499 ']' 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
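The two sub-millisecond pings above close out nvmf_tcp_init: the target port cvl_0_0 (10.0.0.2) now lives inside the cvl_0_0_ns_spdk namespace while the initiator port cvl_0_1 (10.0.0.1) stays in the root namespace, so NVMe/TCP traffic genuinely crosses between the two physical E810 ports. The wiring, condensed from the trace (the iptables comment tag is shortened here):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP listener port, tagged SPDK_NVMF so teardown can strip it:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                                   # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator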
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:18.484 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:18.484 [2024-11-26 19:52:18.466947] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:14:18.484 [2024-11-26 19:52:18.467018] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.484 [2024-11-26 19:52:18.569517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.484 [2024-11-26 19:52:18.620797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.484 [2024-11-26 19:52:18.620848] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:18.484 [2024-11-26 19:52:18.620857] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.484 [2024-11-26 19:52:18.620865] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.484 [2024-11-26 19:52:18.620871] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
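Because NVMF_APP was prefixed with the namespace command, the nvmf_tgt that just logged its DPDK EAL parameters is running inside cvl_0_0_ns_spdk, and waitforlisten polls its RPC socket (up to the max_retries=100 noted above) before the test proceeds. A sketch of that startup, not the exact helper:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
# Poll until /var/tmp/spdk.sock answers RPCs, mirroring waitforlisten:
for _ in $(seq 1 100); do
    ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done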
00:14:18.484 [2024-11-26 19:52:18.621640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.484 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:18.484 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:18.484 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:18.484 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:18.484 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:18.745 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.745 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:18.745 [2024-11-26 19:52:19.490168] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:18.745 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:18.745 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:18.745 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:19.006 Malloc1 00:14:19.006 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:19.266 Malloc2 00:14:19.266 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:19.526 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:19.787 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:19.787 [2024-11-26 19:52:20.528723] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:19.787 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:19.787 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6562d574-b306-40bf-ab4b-52970a282073 -a 10.0.0.2 -s 4420 -i 4 00:14:20.047 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:20.047 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:20.047 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:20.047 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:20.047 
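With the reactor on core 0, provisioning is a straight run of JSON-RPCs followed by a kernel-initiator connect, exactly the calls traced above (rpc.py abbreviates the full scripts/rpc.py path, and the -I value is this run's generated HOSTID):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py bdev_malloc_create 64 512 -b Malloc2
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    -I 6562d574-b306-40bf-ab4b-52970a282073 -a 10.0.0.2 -s 4420 -i 4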
19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:22.591 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:22.591 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:22.591 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:22.591 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:22.591 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:22.591 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:22.591 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:22.591 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:22.591 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:22.591 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:22.591 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:22.591 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.591 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:22.591 [ 0]:0x1 00:14:22.591 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.591 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:22.591 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=acc67154408a41f8b2368606032889fa 00:14:22.591 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ acc67154408a41f8b2368606032889fa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.591 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:22.591 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:22.591 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.591 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:22.591 [ 0]:0x1 00:14:22.591 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:22.591 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.591 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=acc67154408a41f8b2368606032889fa 00:14:22.591 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ acc67154408a41f8b2368606032889fa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.591 19:52:23 
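The '[ 0]:0x1' lines above come from ns_is_visible, whose two probes both show in the trace: nvme list-ns reports whether the controller exposes the NSID at all, and the NGUID from nvme id-ns separates a real namespace from the all-zero identify data an inactive NSID returns. Reconstructed from the ns_masking.sh@43-45 markers:

ns_is_visible() {
    # Succeeds only when NSID $1 is listed and carries a non-zero NGUID.
    nvme list-ns /dev/nvme0 | grep "$1"
    nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
    [[ $nguid != 00000000000000000000000000000000 ]]
}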
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:22.591 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.591 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:22.591 [ 1]:0x2 00:14:22.591 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:22.591 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.591 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2c383c14d86a49a790fe4bb9c47e10fb 00:14:22.591 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2c383c14d86a49a790fe4bb9c47e10fb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.591 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:22.591 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:22.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.852 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.113 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:23.113 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:23.113 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6562d574-b306-40bf-ab4b-52970a282073 -a 10.0.0.2 -s 4420 -i 4 00:14:23.375 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:23.375 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:23.375 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:23.375 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:14:23.375 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:14:23.375 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:25.287 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:25.287 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:25.287 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:25.547 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:25.547 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:25.547 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:14:25.547 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:25.547 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:25.547 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:25.547 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:25.547 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:25.547 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:25.548 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:25.548 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:25.548 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.548 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:25.548 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.548 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:25.548 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:25.548 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.548 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:25.548 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.548 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:25.548 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.548 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:25.548 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:25.548 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:25.548 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:25.548 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:25.548 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.548 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:25.548 [ 0]:0x2 00:14:25.548 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:25.548 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.808 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=2c383c14d86a49a790fe4bb9c47e10fb 00:14:25.808 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2c383c14d86a49a790fe4bb9c47e10fb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.808 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:25.808 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:25.808 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.808 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:25.808 [ 0]:0x1 00:14:25.808 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:25.808 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.068 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=acc67154408a41f8b2368606032889fa 00:14:26.068 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ acc67154408a41f8b2368606032889fa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.068 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:26.068 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.068 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:26.068 [ 1]:0x2 00:14:26.068 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:26.068 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.068 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2c383c14d86a49a790fe4bb9c47e10fb 00:14:26.068 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2c383c14d86a49a790fe4bb9c47e10fb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.068 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:26.068 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:26.068 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:26.068 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:26.068 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:26.068 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.068 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:26.068 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.068 19:52:26 
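This is the heart of the masking test: re-created with --no-auto-visible, namespace 1 vanishes from every controller until a host NQN is allow-listed, and the trace shows the same live connection flipping between the all-zero NGUID and the real acc67154... value as host1 is granted and then stripped. The toggle is two RPCs, with no reconnect in between:

rpc.py nvmf_subsystem_remove_ns  nqn.2016-06.io.spdk:cnode1 1
rpc.py nvmf_subsystem_add_ns     nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
rpc.py nvmf_ns_add_host          nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # NSID 1 visible
rpc.py nvmf_ns_remove_host       nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # NSID 1 hidden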
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:26.068 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.068 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:26.068 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:26.068 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.328 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:26.328 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.328 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:26.328 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:26.328 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:26.328 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:26.328 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:26.328 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.328 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:26.328 [ 0]:0x2 00:14:26.328 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.328 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:26.328 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2c383c14d86a49a790fe4bb9c47e10fb 00:14:26.328 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2c383c14d86a49a790fe4bb9c47e10fb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.329 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:26.329 19:52:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:26.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.329 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:26.592 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:26.592 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6562d574-b306-40bf-ab4b-52970a282073 -a 10.0.0.2 -s 4420 -i 4 00:14:26.853 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:26.853 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:26.853 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:26.853 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:26.853 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:26.853 19:52:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:28.770 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:28.770 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:28.770 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:28.770 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:28.770 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:28.770 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:28.770 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:28.770 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:28.770 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:28.770 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:28.770 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:28.770 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.770 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:29.031 [ 0]:0x1 00:14:29.031 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:29.031 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:29.031 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=acc67154408a41f8b2368606032889fa 00:14:29.031 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ acc67154408a41f8b2368606032889fa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.031 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:29.031 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:29.031 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:29.031 [ 1]:0x2 00:14:29.031 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:29.031 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:29.031 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2c383c14d86a49a790fe4bb9c47e10fb 00:14:29.031 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2c383c14d86a49a790fe4bb9c47e10fb != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.031 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:29.293 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:29.293 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:29.293 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:29.293 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:29.293 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:29.293 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:29.293 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:29.293 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:29.293 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:29.293 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:29.293 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:29.293 19:52:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:29.293 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:29.293 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.293 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:29.293 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:29.293 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:29.293 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:29.293 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:29.293 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:29.293 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:29.293 [ 0]:0x2 00:14:29.294 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:29.294 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:29.294 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2c383c14d86a49a790fe4bb9c47e10fb 00:14:29.294 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2c383c14d86a49a790fe4bb9c47e10fb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.294 19:52:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:29.294 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:29.294 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:29.294 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:29.294 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:29.294 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:29.294 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:29.294 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:29.294 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:29.294 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:29.294 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:29.294 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:29.555 [2024-11-26 19:52:30.235446] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:29.555 request: 00:14:29.555 { 00:14:29.555 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.555 "nsid": 2, 00:14:29.555 "host": "nqn.2016-06.io.spdk:host1", 00:14:29.555 "method": "nvmf_ns_remove_host", 00:14:29.555 "req_id": 1 00:14:29.555 } 00:14:29.555 Got JSON-RPC error response 00:14:29.555 response: 00:14:29.555 { 00:14:29.555 "code": -32602, 00:14:29.556 "message": "Invalid parameters" 00:14:29.556 } 00:14:29.556 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:29.556 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:29.556 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:29.556 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:29.556 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:29.556 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:29.556 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:29.556 19:52:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:29.556 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:29.556 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:29.556 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:29.556 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:29.556 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:29.556 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:29.556 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:29.556 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:29.556 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:29.556 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.556 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:29.556 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:29.556 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:29.556 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:29.556 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:29.556 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:29.556 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:29.556 [ 0]:0x2 00:14:29.556 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:29.556 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:29.816 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2c383c14d86a49a790fe4bb9c47e10fb 00:14:29.816 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2c383c14d86a49a790fe4bb9c47e10fb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.816 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:29.816 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:29.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.816 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3599998 00:14:29.817 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:29.817 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:29.817 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3599998 /var/tmp/host.sock 00:14:29.817 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3599998 ']' 00:14:29.817 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:29.817 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:29.817 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:29.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:29.817 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:29.817 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:29.817 [2024-11-26 19:52:30.505935] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:14:29.817 [2024-11-26 19:52:30.505988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3599998 ] 00:14:29.817 [2024-11-26 19:52:30.593129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.817 [2024-11-26 19:52:30.628701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.759 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:30.759 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:30.759 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.759 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:31.019 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid d1ac3983-5e49-4261-8b31-ec0986d135d6 00:14:31.019 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:31.019 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D1AC39835E4942618B31EC0986D135D6 -i 00:14:31.281 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid fd551bd5-3e25-4cf6-be89-ca8f1fc9bf30 00:14:31.281 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:31.281 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g FD551BD53E254CF6BE89CA8F1FC9BF30 -i 00:14:31.281 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:31.542 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:31.803 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:31.803 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:32.064 nvme0n1 00:14:32.064 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:32.064 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:32.635 nvme1n2 00:14:32.635 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:32.635 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:32.635 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:32.635 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:32.635 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:32.635 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:32.635 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:32.635 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:32.635 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:32.897 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ d1ac3983-5e49-4261-8b31-ec0986d135d6 == \d\1\a\c\3\9\8\3\-\5\e\4\9\-\4\2\6\1\-\8\b\3\1\-\e\c\0\9\8\6\d\1\3\5\d\6 ]] 00:14:32.897 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:32.897 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:32.897 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:33.158 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
fd551bd5-3e25-4cf6-be89-ca8f1fc9bf30 == \f\d\5\5\1\b\d\5\-\3\e\2\5\-\4\c\f\6\-\b\e\8\9\-\c\a\8\f\1\f\c\9\b\f\3\0 ]] 00:14:33.158 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:33.158 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:33.419 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid d1ac3983-5e49-4261-8b31-ec0986d135d6 00:14:33.419 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:33.419 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D1AC39835E4942618B31EC0986D135D6 00:14:33.419 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:33.419 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D1AC39835E4942618B31EC0986D135D6 00:14:33.419 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:33.419 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:33.419 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:33.419 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:33.419 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:33.419 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:33.419 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:33.419 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:33.419 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D1AC39835E4942618B31EC0986D135D6 00:14:33.681 [2024-11-26 19:52:34.266021] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:33.681 [2024-11-26 19:52:34.266048] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:33.681 [2024-11-26 19:52:34.266055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.681 request: 00:14:33.681 { 00:14:33.681 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:33.681 "namespace": { 00:14:33.681 "bdev_name": 
"invalid", 00:14:33.681 "nsid": 1, 00:14:33.681 "nguid": "D1AC39835E4942618B31EC0986D135D6", 00:14:33.681 "no_auto_visible": false, 00:14:33.681 "hide_metadata": false 00:14:33.681 }, 00:14:33.681 "method": "nvmf_subsystem_add_ns", 00:14:33.681 "req_id": 1 00:14:33.681 } 00:14:33.681 Got JSON-RPC error response 00:14:33.681 response: 00:14:33.681 { 00:14:33.681 "code": -32602, 00:14:33.681 "message": "Invalid parameters" 00:14:33.681 } 00:14:33.681 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:33.681 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:33.681 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:33.681 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:33.681 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid d1ac3983-5e49-4261-8b31-ec0986d135d6 00:14:33.681 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:33.681 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D1AC39835E4942618B31EC0986D135D6 -i 00:14:33.681 19:52:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:36.227 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:36.227 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:36.227 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:36.227 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:36.227 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3599998 00:14:36.227 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3599998 ']' 00:14:36.227 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3599998 00:14:36.227 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:36.227 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:36.227 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3599998 00:14:36.227 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:36.227 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:36.227 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3599998' 00:14:36.227 killing process with pid 3599998 00:14:36.227 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3599998 00:14:36.227 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3599998 00:14:36.227 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:36.488 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:36.488 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:36.488 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:36.488 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:36.488 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:36.488 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:36.488 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:36.488 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:36.488 rmmod nvme_tcp 00:14:36.488 rmmod nvme_fabrics 00:14:36.488 rmmod nvme_keyring 00:14:36.488 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:36.488 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:36.488 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:36.488 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3597499 ']' 00:14:36.488 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3597499 00:14:36.488 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3597499 ']' 00:14:36.488 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3597499 00:14:36.488 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:36.488 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:36.488 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3597499 00:14:36.488 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:36.489 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:36.489 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3597499' 00:14:36.489 killing process with pid 3597499 00:14:36.489 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3597499 00:14:36.489 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3597499 00:14:36.750 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:36.750 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:36.750 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:36.750 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:36.750 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:36.750 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
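The teardown traced above follows the harness's usual order: the masking test's host process is killed, the subsystem is deleted, and nvmftestfini unloads the kernel NVMe/TCP stack before stripping the firewall rules it added during setup. A minimal sketch of those last two steps, assuming the rules were tagged with the SPDK_NVMF comment when inserted (the loop form below is a simplification; the trace unloads via modprobe -v -r and lets module dependencies cascade):

  # Reload the ruleset minus every rule carrying the SPDK_NVMF tag.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # Unload the NVMe/TCP modules; tolerate ones a prior unload already removed.
  for mod in nvme-tcp nvme-fabrics nvme-keyring; do
      modprobe -v -r "$mod" 2>/dev/null || true
  done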
00:14:36.750 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:36.750 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:36.750 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:36.750 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.750 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:36.750 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.662 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:38.662 00:14:38.662 real 0m28.761s 00:14:38.662 user 0m32.957s 00:14:38.662 sys 0m8.182s 00:14:38.662 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:38.662 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:38.662 ************************************ 00:14:38.662 END TEST nvmf_ns_masking 00:14:38.662 ************************************ 00:14:38.662 19:52:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:38.662 19:52:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:38.662 19:52:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:38.662 19:52:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:38.662 19:52:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:38.924 ************************************ 00:14:38.924 START TEST nvmf_nvme_cli 00:14:38.924 ************************************ 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:38.924 * Looking for test storage... 
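The real/user/sys block and the START/END banners come from the harness's run_test wrapper, which times each sub-test and fences its output. The sketch below is an assumption about its shape (the actual helper lives in autotest_common.sh), not a copy of it:

  run_test() {
      local name=$1 rc=0; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@" || rc=$?    # produces the real/user/sys summary seen above
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return "$rc"
  }

Here it was invoked as run_test nvmf_nvme_cli .../target/nvme_cli.sh --transport=tcp, which is why the banner names the test and the script then begins probing for test storage.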
00:14:38.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:38.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.924 --rc genhtml_branch_coverage=1 00:14:38.924 --rc genhtml_function_coverage=1 00:14:38.924 --rc genhtml_legend=1 00:14:38.924 --rc geninfo_all_blocks=1 00:14:38.924 --rc geninfo_unexecuted_blocks=1 00:14:38.924 00:14:38.924 ' 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:38.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.924 --rc genhtml_branch_coverage=1 00:14:38.924 --rc genhtml_function_coverage=1 00:14:38.924 --rc genhtml_legend=1 00:14:38.924 --rc geninfo_all_blocks=1 00:14:38.924 --rc geninfo_unexecuted_blocks=1 00:14:38.924 00:14:38.924 ' 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:38.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.924 --rc genhtml_branch_coverage=1 00:14:38.924 --rc genhtml_function_coverage=1 00:14:38.924 --rc genhtml_legend=1 00:14:38.924 --rc geninfo_all_blocks=1 00:14:38.924 --rc geninfo_unexecuted_blocks=1 00:14:38.924 00:14:38.924 ' 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:38.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.924 --rc genhtml_branch_coverage=1 00:14:38.924 --rc genhtml_function_coverage=1 00:14:38.924 --rc genhtml_legend=1 00:14:38.924 --rc geninfo_all_blocks=1 00:14:38.924 --rc geninfo_unexecuted_blocks=1 00:14:38.924 00:14:38.924 ' 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
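The trace just above is the harness gating coverage options on the installed lcov version: lt 1.15 2 splits both versions on ., -, and :, compares them field by field, and succeeds, so the legacy lcov 1.x branch/function coverage flags get exported. A condensed sketch of that comparison (the real cmp_versions also normalizes tokens through a decimal helper, elided here as an assumption):

  # Returns 0 when version $1 is strictly less than version $2.
  lt() {
      local -a v1 v2
      local i n
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0    # first lower field decides
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1    # equal versions are not less-than
  }
  lt 1.15 2 && echo "lcov < 2: use legacy coverage flags"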
00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:38.924 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:39.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:39.187 19:52:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:39.187 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:47.332 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:47.332 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:47.332 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:47.333 
19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:47.333 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:47.333 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:47.333 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:47.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:47.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:14:47.333 00:14:47.333 --- 10.0.0.2 ping statistics --- 00:14:47.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.333 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:47.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
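Setup here splits the two E810 ports across network namespaces so a single host can exercise a real TCP path against itself: cvl_0_0 becomes the target interface inside cvl_0_0_ns_spdk at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and the INPUT ACCEPT rule for port 4420 is tagged so the teardown shown earlier can strip it. Collapsed from the trace (interface names are specific to this runner):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

The two pings, one in each direction around this point, confirm the 10.0.0.0/24 link before any NVMe traffic is attempted.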
00:14:47.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:14:47.333 00:14:47.333 --- 10.0.0.1 ping statistics --- 00:14:47.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.333 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3605707 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3605707 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3605707 ']' 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:47.333 [2024-11-26 19:52:47.265119] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
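nvmfappstart then launches the target inside that namespace, so the kernel initiator in the root namespace reaches it over the 10.0.0.0/24 link rather than loopback. A sketch of the launch as traced, with the backgrounding and pid capture assumed from the surrounding helpers:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                   # 3605707 in this run
  waitforlisten "$nvmfpid"     # waits until /var/tmp/spdk.sock answers RPCs

-m 0xF assigns four reactor cores, which matches the four "Reactor started" notices that follow; -e 0xFFFF enables all tracepoint groups, which is what the app_setup_trace notices describe.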
00:14:47.333 [2024-11-26 19:52:47.265191] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.333 [2024-11-26 19:52:47.341333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:47.333 [2024-11-26 19:52:47.389506] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.333 [2024-11-26 19:52:47.389560] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.333 [2024-11-26 19:52:47.389567] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:47.333 [2024-11-26 19:52:47.389572] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:47.333 [2024-11-26 19:52:47.389577] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:47.333 [2024-11-26 19:52:47.392384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.333 [2024-11-26 19:52:47.392508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:47.333 [2024-11-26 19:52:47.392670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.333 [2024-11-26 19:52:47.392670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:47.333 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:47.334 [2024-11-26 19:52:47.559127] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:47.334 Malloc0 00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
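With the target listening, the test configures it entirely over JSON-RPC; rpc_cmd effectively forwards each call to scripts/rpc.py against the default /var/tmp/spdk.sock. The sequence traced around this point, collapsed into plain commands with arguments copied from the log:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

nvme discover then reads back two discovery log entries (the discovery subsystem and cnode1), and the later waitforserial loop greps lsblk for the serial SPDKISFASTANDAWESOME until both namespaces surface as block devices.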
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:47.334 Malloc1
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:47.334 [2024-11-26 19:52:47.667460] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420
00:14:47.334
00:14:47.334 Discovery Log Number of Records 2, Generation counter 2
00:14:47.334 =====Discovery Log Entry 0======
00:14:47.334 trtype: tcp
00:14:47.334 adrfam: ipv4
00:14:47.334 subtype: current discovery subsystem
00:14:47.334 treq: not required
00:14:47.334 portid: 0
00:14:47.334 trsvcid: 4420
00:14:47.334 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:14:47.334 traddr: 10.0.0.2
00:14:47.334 eflags: explicit discovery connections, duplicate discovery information
00:14:47.334 sectype: none
00:14:47.334 =====Discovery Log Entry 1======
00:14:47.334 trtype: tcp
00:14:47.334 adrfam: ipv4
00:14:47.334 subtype: nvme subsystem
00:14:47.334 treq: not required
00:14:47.334 portid: 0
00:14:47.334 trsvcid: 4420
00:14:47.334 subnqn: nqn.2016-06.io.spdk:cnode1
00:14:47.334 traddr: 10.0.0.2
00:14:47.334 eflags: none
00:14:47.334 sectype: none
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs))
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]]
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]]
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0
00:14:47.334 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:14:48.725 19:52:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2
00:14:48.725 19:52:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0
00:14:48.725 19:52:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:14:48.725 19:52:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]]
00:14:48.725 19:52:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2
00:14:48.725 19:52:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2
00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2
00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0
00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs
00:14:51.274 19:52:51
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:51.274 /dev/nvme0n2 ]] 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:51.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.274 19:52:51 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.274 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:51.275 rmmod nvme_tcp 00:14:51.275 rmmod nvme_fabrics 00:14:51.275 rmmod nvme_keyring 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3605707 ']' 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3605707 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3605707 ']' 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3605707 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3605707 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3605707' 00:14:51.275 killing process with pid 3605707 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3605707 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3605707 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:51.275 19:52:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:53.823 00:14:53.823 real 0m14.528s 00:14:53.823 user 0m20.353s 00:14:53.823 sys 0m6.293s 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:53.823 ************************************ 00:14:53.823 END TEST nvmf_nvme_cli 00:14:53.823 ************************************ 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:53.823 ************************************ 00:14:53.823 START TEST nvmf_vfio_user 00:14:53.823 ************************************ 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:53.823 * Looking for test storage... 00:14:53.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:53.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.823 --rc genhtml_branch_coverage=1 00:14:53.823 --rc genhtml_function_coverage=1 00:14:53.823 --rc genhtml_legend=1 00:14:53.823 --rc geninfo_all_blocks=1 00:14:53.823 --rc geninfo_unexecuted_blocks=1 00:14:53.823 00:14:53.823 ' 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:53.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.823 --rc genhtml_branch_coverage=1 00:14:53.823 --rc genhtml_function_coverage=1 00:14:53.823 --rc genhtml_legend=1 00:14:53.823 --rc geninfo_all_blocks=1 00:14:53.823 --rc geninfo_unexecuted_blocks=1 00:14:53.823 00:14:53.823 ' 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:53.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.823 --rc genhtml_branch_coverage=1 00:14:53.823 --rc genhtml_function_coverage=1 00:14:53.823 --rc genhtml_legend=1 00:14:53.823 --rc geninfo_all_blocks=1 00:14:53.823 --rc geninfo_unexecuted_blocks=1 00:14:53.823 00:14:53.823 ' 00:14:53.823 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:53.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.823 --rc genhtml_branch_coverage=1 00:14:53.823 --rc genhtml_function_coverage=1 00:14:53.823 --rc genhtml_legend=1 00:14:53.823 --rc geninfo_all_blocks=1 00:14:53.823 --rc geninfo_unexecuted_blocks=1 00:14:53.824 00:14:53.824 ' 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:53.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
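Note the shell diagnostic captured above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and bash's test builtin rejects the empty left operand with "integer expression expected". The test then returns non-zero, so the run simply continues down the else path; the message is noise rather than a failure. A defensive pattern that avoids it, sketched with a hypothetical variable name:

    # guard the arithmetic test so an unset/empty flag never reaches -eq
    if [ -n "${SPDK_TEST_EXAMPLE_FLAG:-}" ] && [ "$SPDK_TEST_EXAMPLE_FLAG" -eq 1 ]; then
        echo "flag enabled"
    fi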
00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3607190 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3607190' 00:14:53.824 Process pid: 3607190 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3607190 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:53.824 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3607190 ']' 00:14:53.825 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.825 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:53.825 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.825 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:53.825 19:52:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:53.825 [2024-11-26 19:52:54.423743] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:14:53.825 [2024-11-26 19:52:54.423822] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.825 [2024-11-26 19:52:54.511276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:53.825 [2024-11-26 19:52:54.545774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.825 [2024-11-26 19:52:54.545805] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:53.825 [2024-11-26 19:52:54.545811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.825 [2024-11-26 19:52:54.545816] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.825 [2024-11-26 19:52:54.545821] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:53.825 [2024-11-26 19:52:54.547287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.825 [2024-11-26 19:52:54.547419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.825 [2024-11-26 19:52:54.547569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.825 [2024-11-26 19:52:54.547572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:54.768 19:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:54.768 19:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:54.768 19:52:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:55.711 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:55.711 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:55.711 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:55.711 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:55.711 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:55.711 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:55.971 Malloc1 00:14:55.971 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:56.232 19:52:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:56.232 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:56.493 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:56.493 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:56.493 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:56.754 Malloc2 00:14:56.754 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
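setup_nvmf_vfio_user, traced above and completing just below, differs from the TCP path in one key way: a VFIOUSER listener address is a filesystem directory in which the target creates its vfio-user socket (the 'cntrl' path seen later in the identify step), not an IP:port. Condensed from the per-device loop at nvmf_vfio_user.sh@68-@74, with flag values verbatim from the trace:

    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
        ./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
            -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done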
00:14:57.015 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:57.015 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:57.278 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:57.278 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:57.278 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:57.278 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:57.278 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:57.278 19:52:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:57.278 [2024-11-26 19:52:57.965572] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:14:57.278 [2024-11-26 19:52:57.965606] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3607883 ] 00:14:57.278 [2024-11-26 19:52:58.004450] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:57.278 [2024-11-26 19:52:58.009768] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:57.278 [2024-11-26 19:52:58.009785] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f4258379000 00:14:57.278 [2024-11-26 19:52:58.010768] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:57.278 [2024-11-26 19:52:58.011771] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:57.278 [2024-11-26 19:52:58.012775] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:57.278 [2024-11-26 19:52:58.013777] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:57.278 [2024-11-26 19:52:58.014779] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:57.278 [2024-11-26 19:52:58.015779] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:57.278 [2024-11-26 19:52:58.016786] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:57.278 [2024-11-26 19:52:58.017793] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:57.278 [2024-11-26 19:52:58.018804] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:57.278 [2024-11-26 19:52:58.018812] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f425836e000 00:14:57.278 [2024-11-26 19:52:58.019724] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:57.278 [2024-11-26 19:52:58.029173] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:57.278 [2024-11-26 19:52:58.029192] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:57.278 [2024-11-26 19:52:58.034898] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:57.278 [2024-11-26 19:52:58.034931] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:57.278 [2024-11-26 19:52:58.034993] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:57.278 [2024-11-26 19:52:58.035004] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:57.278 [2024-11-26 19:52:58.035008] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:57.278 [2024-11-26 19:52:58.035902] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:57.278 [2024-11-26 19:52:58.035910] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:57.278 [2024-11-26 19:52:58.035916] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:57.278 [2024-11-26 19:52:58.036906] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:57.278 [2024-11-26 19:52:58.036912] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:57.278 [2024-11-26 19:52:58.036917] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:57.278 [2024-11-26 19:52:58.037914] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:57.278 [2024-11-26 19:52:58.037920] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:57.278 [2024-11-26 19:52:58.038910] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
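The identify step above is the vfio-user analogue of probing a PCIe controller: spdk_nvme_identify names the socket directory as traddr in the transport ID, and the vfio_user_pci DEBUG lines are the client mapping the emulated controller's BARs and sparse mmap regions before the CC.EN/CSTS.RDY enable handshake that follows. The invocation, reconstructed from the trace at nvmf_vfio_user.sh@83:

    ./build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -g -L nvme -L nvme_vfio -L vfio_pci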
00:14:57.278 [2024-11-26 19:52:58.038916] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:57.278 [2024-11-26 19:52:58.038919] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:57.278 [2024-11-26 19:52:58.038924] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:57.278 [2024-11-26 19:52:58.039032] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:57.278 [2024-11-26 19:52:58.039035] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:57.278 [2024-11-26 19:52:58.039039] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:57.278 [2024-11-26 19:52:58.039920] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:57.278 [2024-11-26 19:52:58.040929] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:57.278 [2024-11-26 19:52:58.041934] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:57.278 [2024-11-26 19:52:58.042935] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:57.278 [2024-11-26 19:52:58.042987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:57.278 [2024-11-26 19:52:58.043948] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:57.278 [2024-11-26 19:52:58.043954] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:57.278 [2024-11-26 19:52:58.043957] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:57.278 [2024-11-26 19:52:58.043972] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:57.278 [2024-11-26 19:52:58.043981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:57.278 [2024-11-26 19:52:58.043994] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:57.278 [2024-11-26 19:52:58.043998] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:57.278 [2024-11-26 19:52:58.044000] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:57.278 [2024-11-26 19:52:58.044010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:14:57.278 [2024-11-26 19:52:58.044049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:57.279 [2024-11-26 19:52:58.044056] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:57.279 [2024-11-26 19:52:58.044059] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:57.279 [2024-11-26 19:52:58.044062] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:57.279 [2024-11-26 19:52:58.044065] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:57.279 [2024-11-26 19:52:58.044069] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:57.279 [2024-11-26 19:52:58.044072] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:57.279 [2024-11-26 19:52:58.044075] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:57.279 [2024-11-26 19:52:58.044083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:57.279 [2024-11-26 19:52:58.044090] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:57.279 [2024-11-26 19:52:58.044101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:57.279 [2024-11-26 19:52:58.044109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.279 [2024-11-26 19:52:58.044116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.279 [2024-11-26 19:52:58.044122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.279 [2024-11-26 19:52:58.044128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.279 [2024-11-26 19:52:58.044131] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:57.279 [2024-11-26 19:52:58.044137] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:57.279 [2024-11-26 19:52:58.044144] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:57.279 [2024-11-26 19:52:58.044152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:57.279 [2024-11-26 19:52:58.044156] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:57.279 
[2024-11-26 19:52:58.044163] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:57.279 [2024-11-26 19:52:58.044169] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:57.279 [2024-11-26 19:52:58.044174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:57.279 [2024-11-26 19:52:58.044180] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:57.279 [2024-11-26 19:52:58.044193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:57.279 [2024-11-26 19:52:58.044237] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:57.279 [2024-11-26 19:52:58.044243] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:57.279 [2024-11-26 19:52:58.044248] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:57.279 [2024-11-26 19:52:58.044251] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:57.279 [2024-11-26 19:52:58.044254] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:57.279 [2024-11-26 19:52:58.044258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:57.279 [2024-11-26 19:52:58.044268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:57.279 [2024-11-26 19:52:58.044276] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:57.279 [2024-11-26 19:52:58.044283] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:57.279 [2024-11-26 19:52:58.044289] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:57.279 [2024-11-26 19:52:58.044294] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:57.279 [2024-11-26 19:52:58.044297] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:57.279 [2024-11-26 19:52:58.044299] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:57.279 [2024-11-26 19:52:58.044304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:57.279 [2024-11-26 19:52:58.044322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:57.279 [2024-11-26 19:52:58.044330] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:14:57.279 [2024-11-26 19:52:58.044336] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:57.279 [2024-11-26 19:52:58.044341] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:57.279 [2024-11-26 19:52:58.044344] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:57.279 [2024-11-26 19:52:58.044346] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:57.279 [2024-11-26 19:52:58.044350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:57.279 [2024-11-26 19:52:58.044360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:57.279 [2024-11-26 19:52:58.044367] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:57.279 [2024-11-26 19:52:58.044372] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:57.279 [2024-11-26 19:52:58.044377] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:57.279 [2024-11-26 19:52:58.044381] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:57.279 [2024-11-26 19:52:58.044385] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:57.279 [2024-11-26 19:52:58.044388] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:57.279 [2024-11-26 19:52:58.044392] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:57.279 [2024-11-26 19:52:58.044395] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:57.279 [2024-11-26 19:52:58.044399] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:57.279 [2024-11-26 19:52:58.044412] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:57.279 [2024-11-26 19:52:58.044423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:57.279 [2024-11-26 19:52:58.044431] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:57.279 [2024-11-26 19:52:58.044443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:57.279 [2024-11-26 19:52:58.044451] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:57.279 [2024-11-26 19:52:58.044459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:57.279 [2024-11-26 19:52:58.044467] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:57.279 [2024-11-26 19:52:58.044473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:57.279 [2024-11-26 19:52:58.044483] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:57.279 [2024-11-26 19:52:58.044487] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:57.279 [2024-11-26 19:52:58.044489] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:57.279 [2024-11-26 19:52:58.044492] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:57.279 [2024-11-26 19:52:58.044494] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:57.279 [2024-11-26 19:52:58.044499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:57.279 [2024-11-26 19:52:58.044504] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:57.279 [2024-11-26 19:52:58.044507] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:57.279 [2024-11-26 19:52:58.044509] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:57.279 [2024-11-26 19:52:58.044514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:57.279 [2024-11-26 19:52:58.044519] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:57.279 [2024-11-26 19:52:58.044522] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:57.279 [2024-11-26 19:52:58.044524] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:57.279 [2024-11-26 19:52:58.044528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:57.279 [2024-11-26 19:52:58.044534] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:57.279 [2024-11-26 19:52:58.044537] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:57.279 [2024-11-26 19:52:58.044539] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:57.279 [2024-11-26 19:52:58.044544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:57.279 [2024-11-26 19:52:58.044549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:57.280 [2024-11-26 19:52:58.044557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:14:57.280 [2024-11-26 19:52:58.044565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:57.280 [2024-11-26 19:52:58.044570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:57.280 ===================================================== 00:14:57.280 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:57.280 ===================================================== 00:14:57.280 Controller Capabilities/Features 00:14:57.280 ================================ 00:14:57.280 Vendor ID: 4e58 00:14:57.280 Subsystem Vendor ID: 4e58 00:14:57.280 Serial Number: SPDK1 00:14:57.280 Model Number: SPDK bdev Controller 00:14:57.280 Firmware Version: 25.01 00:14:57.280 Recommended Arb Burst: 6 00:14:57.280 IEEE OUI Identifier: 8d 6b 50 00:14:57.280 Multi-path I/O 00:14:57.280 May have multiple subsystem ports: Yes 00:14:57.280 May have multiple controllers: Yes 00:14:57.280 Associated with SR-IOV VF: No 00:14:57.280 Max Data Transfer Size: 131072 00:14:57.280 Max Number of Namespaces: 32 00:14:57.280 Max Number of I/O Queues: 127 00:14:57.280 NVMe Specification Version (VS): 1.3 00:14:57.280 NVMe Specification Version (Identify): 1.3 00:14:57.280 Maximum Queue Entries: 256 00:14:57.280 Contiguous Queues Required: Yes 00:14:57.280 Arbitration Mechanisms Supported 00:14:57.280 Weighted Round Robin: Not Supported 00:14:57.280 Vendor Specific: Not Supported 00:14:57.280 Reset Timeout: 15000 ms 00:14:57.280 Doorbell Stride: 4 bytes 00:14:57.280 NVM Subsystem Reset: Not Supported 00:14:57.280 Command Sets Supported 00:14:57.280 NVM Command Set: Supported 00:14:57.280 Boot Partition: Not Supported 00:14:57.280 Memory Page Size Minimum: 4096 bytes 00:14:57.280 Memory Page Size Maximum: 4096 bytes 00:14:57.280 Persistent Memory Region: Not Supported 00:14:57.280 Optional Asynchronous Events Supported 00:14:57.280 Namespace Attribute Notices: Supported 00:14:57.280 Firmware Activation Notices: Not Supported 00:14:57.280 ANA Change Notices: Not Supported 00:14:57.280 PLE Aggregate Log Change Notices: Not Supported 00:14:57.280 LBA Status Info Alert Notices: Not Supported 00:14:57.280 EGE Aggregate Log Change Notices: Not Supported 00:14:57.280 Normal NVM Subsystem Shutdown event: Not Supported 00:14:57.280 Zone Descriptor Change Notices: Not Supported 00:14:57.280 Discovery Log Change Notices: Not Supported 00:14:57.280 Controller Attributes 00:14:57.280 128-bit Host Identifier: Supported 00:14:57.280 Non-Operational Permissive Mode: Not Supported 00:14:57.280 NVM Sets: Not Supported 00:14:57.280 Read Recovery Levels: Not Supported 00:14:57.280 Endurance Groups: Not Supported 00:14:57.280 Predictable Latency Mode: Not Supported 00:14:57.280 Traffic Based Keep ALive: Not Supported 00:14:57.280 Namespace Granularity: Not Supported 00:14:57.280 SQ Associations: Not Supported 00:14:57.280 UUID List: Not Supported 00:14:57.280 Multi-Domain Subsystem: Not Supported 00:14:57.280 Fixed Capacity Management: Not Supported 00:14:57.280 Variable Capacity Management: Not Supported 00:14:57.280 Delete Endurance Group: Not Supported 00:14:57.280 Delete NVM Set: Not Supported 00:14:57.280 Extended LBA Formats Supported: Not Supported 00:14:57.280 Flexible Data Placement Supported: Not Supported 00:14:57.280 00:14:57.280 Controller Memory Buffer Support 00:14:57.280 ================================ 00:14:57.280 
Supported: No 00:14:57.280 00:14:57.280 Persistent Memory Region Support 00:14:57.280 ================================ 00:14:57.280 Supported: No 00:14:57.280 00:14:57.280 Admin Command Set Attributes 00:14:57.280 ============================ 00:14:57.280 Security Send/Receive: Not Supported 00:14:57.280 Format NVM: Not Supported 00:14:57.280 Firmware Activate/Download: Not Supported 00:14:57.280 Namespace Management: Not Supported 00:14:57.280 Device Self-Test: Not Supported 00:14:57.280 Directives: Not Supported 00:14:57.280 NVMe-MI: Not Supported 00:14:57.280 Virtualization Management: Not Supported 00:14:57.280 Doorbell Buffer Config: Not Supported 00:14:57.280 Get LBA Status Capability: Not Supported 00:14:57.280 Command & Feature Lockdown Capability: Not Supported 00:14:57.280 Abort Command Limit: 4 00:14:57.280 Async Event Request Limit: 4 00:14:57.280 Number of Firmware Slots: N/A 00:14:57.280 Firmware Slot 1 Read-Only: N/A 00:14:57.280 Firmware Activation Without Reset: N/A 00:14:57.280 Multiple Update Detection Support: N/A 00:14:57.280 Firmware Update Granularity: No Information Provided 00:14:57.280 Per-Namespace SMART Log: No 00:14:57.280 Asymmetric Namespace Access Log Page: Not Supported 00:14:57.280 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:57.280 Command Effects Log Page: Supported 00:14:57.280 Get Log Page Extended Data: Supported 00:14:57.280 Telemetry Log Pages: Not Supported 00:14:57.280 Persistent Event Log Pages: Not Supported 00:14:57.280 Supported Log Pages Log Page: May Support 00:14:57.280 Commands Supported & Effects Log Page: Not Supported 00:14:57.280 Feature Identifiers & Effects Log Page:May Support 00:14:57.280 NVMe-MI Commands & Effects Log Page: May Support 00:14:57.280 Data Area 4 for Telemetry Log: Not Supported 00:14:57.280 Error Log Page Entries Supported: 128 00:14:57.280 Keep Alive: Supported 00:14:57.280 Keep Alive Granularity: 10000 ms 00:14:57.280 00:14:57.280 NVM Command Set Attributes 00:14:57.280 ========================== 00:14:57.280 Submission Queue Entry Size 00:14:57.280 Max: 64 00:14:57.280 Min: 64 00:14:57.280 Completion Queue Entry Size 00:14:57.280 Max: 16 00:14:57.280 Min: 16 00:14:57.280 Number of Namespaces: 32 00:14:57.280 Compare Command: Supported 00:14:57.280 Write Uncorrectable Command: Not Supported 00:14:57.280 Dataset Management Command: Supported 00:14:57.280 Write Zeroes Command: Supported 00:14:57.280 Set Features Save Field: Not Supported 00:14:57.280 Reservations: Not Supported 00:14:57.280 Timestamp: Not Supported 00:14:57.280 Copy: Supported 00:14:57.280 Volatile Write Cache: Present 00:14:57.280 Atomic Write Unit (Normal): 1 00:14:57.280 Atomic Write Unit (PFail): 1 00:14:57.280 Atomic Compare & Write Unit: 1 00:14:57.280 Fused Compare & Write: Supported 00:14:57.280 Scatter-Gather List 00:14:57.280 SGL Command Set: Supported (Dword aligned) 00:14:57.280 SGL Keyed: Not Supported 00:14:57.280 SGL Bit Bucket Descriptor: Not Supported 00:14:57.280 SGL Metadata Pointer: Not Supported 00:14:57.280 Oversized SGL: Not Supported 00:14:57.280 SGL Metadata Address: Not Supported 00:14:57.280 SGL Offset: Not Supported 00:14:57.280 Transport SGL Data Block: Not Supported 00:14:57.280 Replay Protected Memory Block: Not Supported 00:14:57.280 00:14:57.280 Firmware Slot Information 00:14:57.280 ========================= 00:14:57.280 Active slot: 1 00:14:57.280 Slot 1 Firmware Revision: 25.01 00:14:57.280 00:14:57.280 00:14:57.280 Commands Supported and Effects 00:14:57.280 ============================== 00:14:57.280 Admin 
Commands 00:14:57.280 -------------- 00:14:57.280 Get Log Page (02h): Supported 00:14:57.280 Identify (06h): Supported 00:14:57.280 Abort (08h): Supported 00:14:57.280 Set Features (09h): Supported 00:14:57.280 Get Features (0Ah): Supported 00:14:57.280 Asynchronous Event Request (0Ch): Supported 00:14:57.280 Keep Alive (18h): Supported 00:14:57.280 I/O Commands 00:14:57.280 ------------ 00:14:57.280 Flush (00h): Supported LBA-Change 00:14:57.280 Write (01h): Supported LBA-Change 00:14:57.280 Read (02h): Supported 00:14:57.280 Compare (05h): Supported 00:14:57.280 Write Zeroes (08h): Supported LBA-Change 00:14:57.280 Dataset Management (09h): Supported LBA-Change 00:14:57.280 Copy (19h): Supported LBA-Change 00:14:57.280 00:14:57.280 Error Log 00:14:57.280 ========= 00:14:57.280 00:14:57.280 Arbitration 00:14:57.280 =========== 00:14:57.280 Arbitration Burst: 1 00:14:57.280 00:14:57.280 Power Management 00:14:57.280 ================ 00:14:57.280 Number of Power States: 1 00:14:57.280 Current Power State: Power State #0 00:14:57.280 Power State #0: 00:14:57.280 Max Power: 0.00 W 00:14:57.280 Non-Operational State: Operational 00:14:57.280 Entry Latency: Not Reported 00:14:57.280 Exit Latency: Not Reported 00:14:57.280 Relative Read Throughput: 0 00:14:57.280 Relative Read Latency: 0 00:14:57.280 Relative Write Throughput: 0 00:14:57.280 Relative Write Latency: 0 00:14:57.280 Idle Power: Not Reported 00:14:57.280 Active Power: Not Reported 00:14:57.280 Non-Operational Permissive Mode: Not Supported 00:14:57.280 00:14:57.280 Health Information 00:14:57.280 ================== 00:14:57.280 Critical Warnings: 00:14:57.280 Available Spare Space: OK 00:14:57.280 Temperature: OK 00:14:57.280 Device Reliability: OK 00:14:57.280 Read Only: No 00:14:57.280 Volatile Memory Backup: OK 00:14:57.280 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:57.280 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:57.280 Available Spare: 0% 00:14:57.280 Available Spare Threshold: 0% 00:14:57.281 Life Percentage Used: 0% 00:14:57.281 Data Units Read: 0 00:14:57.281 Data Units Written: 0 00:14:57.281 Host Read Commands: 0 00:14:57.281 Host Write Commands: 0 00:14:57.281 Controller Busy Time: 0 minutes 00:14:57.281 Power Cycles: 0 00:14:57.281 Power On Hours: 0 hours 00:14:57.281 Unsafe Shutdowns: 0 00:14:57.281 Unrecoverable Media Errors: 0 00:14:57.281 Lifetime Error Log Entries: 0 00:14:57.281 Warning Temperature Time: 0 minutes 00:14:57.281 Critical Temperature Time: 0 minutes 00:14:57.281 00:14:57.281 Number of Queues 00:14:57.281 ================ 00:14:57.281 Number of I/O Submission Queues: 127 00:14:57.281 Number of I/O Completion Queues: 127 00:14:57.281 00:14:57.281 Active Namespaces 00:14:57.281 ================= 00:14:57.281 Namespace ID:1 00:14:57.281 Error Recovery Timeout: Unlimited 00:14:57.281 Command Set Identifier: NVM (00h) 00:14:57.281 Deallocate: Supported 00:14:57.281 Deallocated/Unwritten Error: Not Supported 00:14:57.281 Deallocated Read Value: Unknown 00:14:57.281 Deallocate in Write Zeroes: Not Supported 00:14:57.281 Deallocated Guard Field: 0xFFFF 00:14:57.281 Flush: Supported 00:14:57.281 Reservation: Supported 00:14:57.281 Namespace Sharing Capabilities: Multiple Controllers 00:14:57.281 Size (in LBAs): 131072 (0GiB) 00:14:57.281 Capacity (in LBAs): 131072 (0GiB) 00:14:57.281 Utilization (in LBAs): 131072 (0GiB) 00:14:57.281 NGUID: 38EDC2FF501A4C4B977713080632D40E 00:14:57.281 UUID: 38edc2ff-501a-4c4b-9777-13080632d40e 00:14:57.281 Thin Provisioning: Not Supported 00:14:57.281 Per-NS Atomic Units: Yes 00:14:57.281 Atomic Boundary Size (Normal): 0 00:14:57.281 Atomic Boundary Size (PFail): 0 00:14:57.281 Atomic Boundary Offset: 0 00:14:57.281 Maximum Single Source Range Length: 65535 00:14:57.281 Maximum Copy Length: 65535 00:14:57.281 Maximum Source Range Count: 1 00:14:57.281 NGUID/EUI64 Never Reused: No 00:14:57.281 Namespace Write Protected: No 00:14:57.281 Number of LBA Formats: 1 00:14:57.281 Current LBA Format: LBA Format #00 00:14:57.281 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:57.281 00:14:57.281
[2024-11-26 19:52:58.044643] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:57.280 [2024-11-26 19:52:58.044650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:57.280 [2024-11-26 19:52:58.044672] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:57.280 [2024-11-26 19:52:58.044679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.280 [2024-11-26 19:52:58.044683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.280 [2024-11-26 19:52:58.044688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.280 [2024-11-26 19:52:58.044692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.280 [2024-11-26 19:52:58.044952] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:57.281 [2024-11-26 19:52:58.044960] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:57.281 [2024-11-26 19:52:58.045949] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:57.281 [2024-11-26 19:52:58.045987] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:57.281 [2024-11-26 19:52:58.045992] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:57.281 [2024-11-26 19:52:58.046957] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:57.281 [2024-11-26 19:52:58.046965] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:57.281 [2024-11-26 19:52:58.047020] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:57.281 [2024-11-26 19:52:58.049163] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:57.281 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:14:57.542 [2024-11-26 19:52:58.238841] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:02.850 Initializing NVMe Controllers 00:15:02.850 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:02.850 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:02.850 Initialization complete. Launching workers. 00:15:02.850 ======================================================== 00:15:02.850 Latency(us) 00:15:02.850 Device Information : IOPS MiB/s Average min max 00:15:02.850 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40066.40 156.51 3197.35 868.20 8688.23 00:15:02.850 ======================================================== 00:15:02.850 Total : 40066.40 156.51 3197.35 868.20 8688.23 00:15:02.850 00:15:02.850 [2024-11-26 19:53:03.262764] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:02.850 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:02.850 [2024-11-26 19:53:03.452602] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:08.139 Initializing NVMe Controllers 00:15:08.139 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:08.139 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:08.139 Initialization complete. Launching workers. 
00:15:08.139 ======================================================== 00:15:08.139 Latency(us) 00:15:08.139 Device Information : IOPS MiB/s Average min max 00:15:08.139 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16025.60 62.60 7996.64 5986.43 15962.69 00:15:08.139 ======================================================== 00:15:08.139 Total : 16025.60 62.60 7996.64 5986.43 15962.69 00:15:08.139 00:15:08.139 [2024-11-26 19:53:08.488812] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:08.139 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:08.139 [2024-11-26 19:53:08.691711] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:13.435 [2024-11-26 19:53:13.767359] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:13.435 Initializing NVMe Controllers 00:15:13.435 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:13.435 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:13.435 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:13.435 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:13.435 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:13.435 Initialization complete. Launching workers. 00:15:13.435 Starting thread on core 2 00:15:13.435 Starting thread on core 3 00:15:13.435 Starting thread on core 1 00:15:13.435 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:13.435 [2024-11-26 19:53:14.024394] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:16.902 [2024-11-26 19:53:17.078600] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:16.902 Initializing NVMe Controllers 00:15:16.902 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:16.902 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:16.902 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:16.902 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:16.902 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:16.902 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:16.902 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:16.902 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:16.902 Initialization complete. Launching workers. 
00:15:16.902 Starting thread on core 1 with urgent priority queue 00:15:16.902 Starting thread on core 2 with urgent priority queue 00:15:16.902 Starting thread on core 3 with urgent priority queue 00:15:16.902 Starting thread on core 0 with urgent priority queue 00:15:16.902 SPDK bdev Controller (SPDK1 ) core 0: 10104.00 IO/s 9.90 secs/100000 ios 00:15:16.902 SPDK bdev Controller (SPDK1 ) core 1: 11413.33 IO/s 8.76 secs/100000 ios 00:15:16.902 SPDK bdev Controller (SPDK1 ) core 2: 12015.00 IO/s 8.32 secs/100000 ios 00:15:16.902 SPDK bdev Controller (SPDK1 ) core 3: 11297.00 IO/s 8.85 secs/100000 ios 00:15:16.902 ======================================================== 00:15:16.902 00:15:16.902 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:16.902 [2024-11-26 19:53:17.315612] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:16.902 Initializing NVMe Controllers 00:15:16.902 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:16.902 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:16.902 Namespace ID: 1 size: 0GB 00:15:16.902 Initialization complete. 00:15:16.902 INFO: using host memory buffer for IO 00:15:16.902 Hello world! 00:15:16.902 [2024-11-26 19:53:17.349831] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:16.902 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:16.902 [2024-11-26 19:53:17.581234] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:17.845 Initializing NVMe Controllers 00:15:17.845 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:17.845 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:17.845 Initialization complete. Launching workers. 
00:15:17.845 submit (in ns) avg, min, max = 5994.1, 2820.8, 3997608.3 00:15:17.846 complete (in ns) avg, min, max = 17808.6, 1635.0, 7985810.8 00:15:17.846 00:15:17.846 Submit histogram 00:15:17.846 ================ 00:15:17.846 Range in us Cumulative Count 00:15:17.846 2.813 - 2.827: 0.0404% ( 8) 00:15:17.846 2.827 - 2.840: 0.5756% ( 106) 00:15:17.846 2.840 - 2.853: 2.5547% ( 392) 00:15:17.846 2.853 - 2.867: 6.3664% ( 755) 00:15:17.846 2.867 - 2.880: 11.1728% ( 952) 00:15:17.846 2.880 - 2.893: 17.3121% ( 1216) 00:15:17.846 2.893 - 2.907: 23.1635% ( 1159) 00:15:17.846 2.907 - 2.920: 28.5960% ( 1076) 00:15:17.846 2.920 - 2.933: 34.9371% ( 1256) 00:15:17.846 2.933 - 2.947: 40.2080% ( 1044) 00:15:17.846 2.947 - 2.960: 45.3527% ( 1019) 00:15:17.846 2.960 - 2.973: 51.5727% ( 1232) 00:15:17.846 2.973 - 2.987: 59.7264% ( 1615) 00:15:17.846 2.987 - 3.000: 68.8242% ( 1802) 00:15:17.846 3.000 - 3.013: 78.5631% ( 1929) 00:15:17.846 3.013 - 3.027: 84.9447% ( 1264) 00:15:17.846 3.027 - 3.040: 90.9375% ( 1187) 00:15:17.846 3.040 - 3.053: 95.3097% ( 866) 00:15:17.846 3.053 - 3.067: 97.5817% ( 450) 00:15:17.846 3.067 - 3.080: 98.7681% ( 235) 00:15:17.846 3.080 - 3.093: 99.2427% ( 94) 00:15:17.846 3.093 - 3.107: 99.4547% ( 42) 00:15:17.846 3.107 - 3.120: 99.5254% ( 14) 00:15:17.846 3.120 - 3.133: 99.5658% ( 8) 00:15:17.846 3.133 - 3.147: 99.5709% ( 1) 00:15:17.846 3.173 - 3.187: 99.5759% ( 1) 00:15:17.846 3.360 - 3.373: 99.5810% ( 1) 00:15:17.846 3.440 - 3.467: 99.5860% ( 1) 00:15:17.846 3.467 - 3.493: 99.5911% ( 1) 00:15:17.846 3.947 - 3.973: 99.5961% ( 1) 00:15:17.846 4.027 - 4.053: 99.6062% ( 2) 00:15:17.846 4.107 - 4.133: 99.6112% ( 1) 00:15:17.846 4.400 - 4.427: 99.6163% ( 1) 00:15:17.846 4.427 - 4.453: 99.6213% ( 1) 00:15:17.846 4.453 - 4.480: 99.6264% ( 1) 00:15:17.846 4.507 - 4.533: 99.6314% ( 1) 00:15:17.846 4.613 - 4.640: 99.6365% ( 1) 00:15:17.846 4.640 - 4.667: 99.6415% ( 1) 00:15:17.846 4.693 - 4.720: 99.6466% ( 1) 00:15:17.846 4.720 - 4.747: 99.6516% ( 1) 00:15:17.846 4.747 - 4.773: 99.6567% ( 1) 00:15:17.846 4.907 - 4.933: 99.6617% ( 1) 00:15:17.846 4.933 - 4.960: 99.6668% ( 1) 00:15:17.846 4.987 - 5.013: 99.6769% ( 2) 00:15:17.846 5.013 - 5.040: 99.6819% ( 1) 00:15:17.846 5.040 - 5.067: 99.6920% ( 2) 00:15:17.846 5.067 - 5.093: 99.6971% ( 1) 00:15:17.846 5.093 - 5.120: 99.7021% ( 1) 00:15:17.846 5.120 - 5.147: 99.7072% ( 1) 00:15:17.846 5.227 - 5.253: 99.7173% ( 2) 00:15:17.846 5.307 - 5.333: 99.7274% ( 2) 00:15:17.846 5.360 - 5.387: 99.7324% ( 1) 00:15:17.846 5.387 - 5.413: 99.7425% ( 2) 00:15:17.846 5.467 - 5.493: 99.7476% ( 1) 00:15:17.846 5.493 - 5.520: 99.7627% ( 3) 00:15:17.846 5.520 - 5.547: 99.7678% ( 1) 00:15:17.846 5.600 - 5.627: 99.7728% ( 1) 00:15:17.846 5.680 - 5.707: 99.7829% ( 2) 00:15:17.846 5.707 - 5.733: 99.7880% ( 1) 00:15:17.846 5.733 - 5.760: 99.7930% ( 1) 00:15:17.846 5.760 - 5.787: 99.7981% ( 1) 00:15:17.846 5.813 - 5.840: 99.8031% ( 1) 00:15:17.846 5.840 - 5.867: 99.8081% ( 1) 00:15:17.846 5.867 - 5.893: 99.8182% ( 2) 00:15:17.846 5.947 - 5.973: 99.8233% ( 1) 00:15:17.846 6.000 - 6.027: 99.8334% ( 2) 00:15:17.846 6.053 - 6.080: 99.8435% ( 2) 00:15:17.846 6.080 - 6.107: 99.8536% ( 2) 00:15:17.846 6.160 - 6.187: 99.8586% ( 1) 00:15:17.846 6.187 - 6.213: 99.8637% ( 1) 00:15:17.846 6.213 - 6.240: 99.8687% ( 1) 00:15:17.846 6.267 - 6.293: 99.8738% ( 1) 00:15:17.846 6.347 - 6.373: 99.8839% ( 2) 00:15:17.846 6.373 - 6.400: 99.8889% ( 1) 00:15:17.846 6.587 - 6.613: 99.8940% ( 1) 00:15:17.846 6.640 - 6.667: 99.9041% ( 2) 00:15:17.846 6.827 - 6.880: 99.9091% ( 1) 00:15:17.846 
6.880 - 6.933: 99.9142% ( 1) 00:15:17.846 6.933 - 6.987: 99.9192% ( 1) 00:15:17.846 [2024-11-26 19:53:18.595736] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:17.846 7.200 - 7.253: 99.9243% ( 1) 00:15:17.846 3986.773 - 4014.080: 100.0000% ( 15) 00:15:17.846 00:15:17.846 Complete histogram 00:15:17.846 ================== 00:15:17.846 Range in us Cumulative Count 00:15:17.846 1.633 - 1.640: 0.5150% ( 102) 00:15:17.846 1.640 - 1.647: 1.0198% ( 100) 00:15:17.846 1.647 - 1.653: 1.0956% ( 15) 00:15:17.846 1.653 - 1.660: 1.1864% ( 18) 00:15:17.846 1.660 - 1.667: 1.2521% ( 13) 00:15:17.846 1.667 - 1.673: 1.2723% ( 4) 00:15:17.846 1.673 - 1.680: 1.2824% ( 2) 00:15:17.846 1.680 - 1.687: 7.6892% ( 1269) 00:15:17.846 1.687 - 1.693: 47.4024% ( 7866) 00:15:17.846 1.693 - 1.700: 57.3939% ( 1979) 00:15:17.846 1.700 - 1.707: 66.9763% ( 1898) 00:15:17.846 1.707 - 1.720: 80.9209% ( 2762) 00:15:17.846 1.720 - 1.733: 84.0107% ( 612) 00:15:17.846 1.733 - 1.747: 84.9952% ( 195) 00:15:17.846 1.747 - 1.760: 89.6501% ( 922) 00:15:17.846 1.760 - 1.773: 95.2189% ( 1103) 00:15:17.846 1.773 - 1.787: 98.1219% ( 575) 00:15:17.846 1.787 - 1.800: 99.2174% ( 217) 00:15:17.846 1.800 - 1.813: 99.4295% ( 42) 00:15:17.846 1.813 - 1.827: 99.4749% ( 9) 00:15:17.846 1.827 - 1.840: 99.4850% ( 2) 00:15:17.846 1.840 - 1.853: 99.4901% ( 1) 00:15:17.846 3.293 - 3.307: 99.4951% ( 1) 00:15:17.846 3.467 - 3.493: 99.5002% ( 1) 00:15:17.846 3.600 - 3.627: 99.5052% ( 1) 00:15:17.846 3.840 - 3.867: 99.5103% ( 1) 00:15:17.846 3.867 - 3.893: 99.5153% ( 1) 00:15:17.846 3.920 - 3.947: 99.5204% ( 1) 00:15:17.846 3.973 - 4.000: 99.5254% ( 1) 00:15:17.846 4.187 - 4.213: 99.5305% ( 1) 00:15:17.846 4.320 - 4.347: 99.5355% ( 1) 00:15:17.846 4.480 - 4.507: 99.5456% ( 2) 00:15:17.846 4.747 - 4.773: 99.5507% ( 1) 00:15:17.846 4.880 - 4.907: 99.5557% ( 1) 00:15:17.846 4.907 - 4.933: 99.5608% ( 1) 00:15:17.846 5.040 - 5.067: 99.5658% ( 1) 00:15:17.846 5.387 - 5.413: 99.5759% ( 2) 00:15:17.846 5.413 - 5.440: 99.5810% ( 1) 00:15:17.846 5.600 - 5.627: 99.5860% ( 1) 00:15:17.846 9.653 - 9.707: 99.5911% ( 1) 00:15:17.846 11.253 - 11.307: 99.5961% ( 1) 00:15:17.846 33.067 - 33.280: 99.6012% ( 1) 00:15:17.846 1297.067 - 1303.893: 99.6062% ( 1) 00:15:17.846 3986.773 - 4014.080: 99.9899% ( 76) 00:15:17.846 5980.160 - 6007.467: 99.9950% ( 1) 00:15:17.846 7973.547 - 8028.160: 100.0000% ( 1) 00:15:17.846 00:15:17.846 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:17.846 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:17.846 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:17.846 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:17.846 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:18.108 [ 00:15:18.108 { 00:15:18.108 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:18.108 "subtype": "Discovery", 00:15:18.108 "listen_addresses": [], 00:15:18.108 "allow_any_host": true, 00:15:18.108 "hosts": [] 00:15:18.108 }, 00:15:18.108 { 00:15:18.108 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:18.108 "subtype": "NVMe", 
00:15:18.108 "listen_addresses": [ 00:15:18.108 { 00:15:18.108 "trtype": "VFIOUSER", 00:15:18.108 "adrfam": "IPv4", 00:15:18.108 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:18.108 "trsvcid": "0" 00:15:18.108 } 00:15:18.108 ], 00:15:18.108 "allow_any_host": true, 00:15:18.108 "hosts": [], 00:15:18.108 "serial_number": "SPDK1", 00:15:18.108 "model_number": "SPDK bdev Controller", 00:15:18.108 "max_namespaces": 32, 00:15:18.108 "min_cntlid": 1, 00:15:18.108 "max_cntlid": 65519, 00:15:18.108 "namespaces": [ 00:15:18.108 { 00:15:18.108 "nsid": 1, 00:15:18.108 "bdev_name": "Malloc1", 00:15:18.108 "name": "Malloc1", 00:15:18.108 "nguid": "38EDC2FF501A4C4B977713080632D40E", 00:15:18.108 "uuid": "38edc2ff-501a-4c4b-9777-13080632d40e" 00:15:18.108 } 00:15:18.108 ] 00:15:18.108 }, 00:15:18.108 { 00:15:18.108 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:18.108 "subtype": "NVMe", 00:15:18.108 "listen_addresses": [ 00:15:18.108 { 00:15:18.108 "trtype": "VFIOUSER", 00:15:18.108 "adrfam": "IPv4", 00:15:18.108 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:18.108 "trsvcid": "0" 00:15:18.108 } 00:15:18.108 ], 00:15:18.108 "allow_any_host": true, 00:15:18.108 "hosts": [], 00:15:18.108 "serial_number": "SPDK2", 00:15:18.108 "model_number": "SPDK bdev Controller", 00:15:18.108 "max_namespaces": 32, 00:15:18.108 "min_cntlid": 1, 00:15:18.108 "max_cntlid": 65519, 00:15:18.108 "namespaces": [ 00:15:18.108 { 00:15:18.108 "nsid": 1, 00:15:18.108 "bdev_name": "Malloc2", 00:15:18.108 "name": "Malloc2", 00:15:18.108 "nguid": "1D818864F5C14451BF92073AE81E5584", 00:15:18.108 "uuid": "1d818864-f5c1-4451-bf92-073ae81e5584" 00:15:18.108 } 00:15:18.108 ] 00:15:18.108 } 00:15:18.108 ] 00:15:18.108 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:18.108 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:18.108 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3611918 00:15:18.108 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:18.108 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:18.108 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:18.108 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:18.108 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:18.108 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:18.108 19:53:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:18.369 [2024-11-26 19:53:18.976354] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:18.369 Malloc3 00:15:18.369 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:18.369 [2024-11-26 19:53:19.170674] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:18.629 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:18.629 Asynchronous Event Request test 00:15:18.629 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:18.629 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:18.629 Registering asynchronous event callbacks... 00:15:18.629 Starting namespace attribute notice tests for all controllers... 00:15:18.629 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:18.629 aer_cb - Changed Namespace 00:15:18.629 Cleaning up... 00:15:18.629 [ 00:15:18.629 { 00:15:18.630 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:18.630 "subtype": "Discovery", 00:15:18.630 "listen_addresses": [], 00:15:18.630 "allow_any_host": true, 00:15:18.630 "hosts": [] 00:15:18.630 }, 00:15:18.630 { 00:15:18.630 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:18.630 "subtype": "NVMe", 00:15:18.630 "listen_addresses": [ 00:15:18.630 { 00:15:18.630 "trtype": "VFIOUSER", 00:15:18.630 "adrfam": "IPv4", 00:15:18.630 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:18.630 "trsvcid": "0" 00:15:18.630 } 00:15:18.630 ], 00:15:18.630 "allow_any_host": true, 00:15:18.630 "hosts": [], 00:15:18.630 "serial_number": "SPDK1", 00:15:18.630 "model_number": "SPDK bdev Controller", 00:15:18.630 "max_namespaces": 32, 00:15:18.630 "min_cntlid": 1, 00:15:18.630 "max_cntlid": 65519, 00:15:18.630 "namespaces": [ 00:15:18.630 { 00:15:18.630 "nsid": 1, 00:15:18.630 "bdev_name": "Malloc1", 00:15:18.630 "name": "Malloc1", 00:15:18.630 "nguid": "38EDC2FF501A4C4B977713080632D40E", 00:15:18.630 "uuid": "38edc2ff-501a-4c4b-9777-13080632d40e" 00:15:18.630 }, 00:15:18.630 { 00:15:18.630 "nsid": 2, 00:15:18.630 "bdev_name": "Malloc3", 00:15:18.630 "name": "Malloc3", 00:15:18.630 "nguid": "A4B7F77557724CD7B201CFA924087418", 00:15:18.630 "uuid": "a4b7f775-5772-4cd7-b201-cfa924087418" 00:15:18.630 } 00:15:18.630 ] 00:15:18.630 }, 00:15:18.630 { 00:15:18.630 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:18.630 "subtype": "NVMe", 00:15:18.630 "listen_addresses": [ 00:15:18.630 { 00:15:18.630 "trtype": "VFIOUSER", 00:15:18.630 "adrfam": "IPv4", 00:15:18.630 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:18.630 "trsvcid": "0" 00:15:18.630 } 00:15:18.630 ], 00:15:18.630 "allow_any_host": true, 00:15:18.630 "hosts": [], 00:15:18.630 "serial_number": "SPDK2", 00:15:18.630 "model_number": "SPDK bdev 
Controller", 00:15:18.630 "max_namespaces": 32, 00:15:18.630 "min_cntlid": 1, 00:15:18.630 "max_cntlid": 65519, 00:15:18.630 "namespaces": [ 00:15:18.630 { 00:15:18.630 "nsid": 1, 00:15:18.630 "bdev_name": "Malloc2", 00:15:18.630 "name": "Malloc2", 00:15:18.630 "nguid": "1D818864F5C14451BF92073AE81E5584", 00:15:18.630 "uuid": "1d818864-f5c1-4451-bf92-073ae81e5584" 00:15:18.630 } 00:15:18.630 ] 00:15:18.630 } 00:15:18.630 ] 00:15:18.630 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3611918 00:15:18.630 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:18.630 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:18.630 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:18.630 19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:18.630 [2024-11-26 19:53:19.396712] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:15:18.630 [2024-11-26 19:53:19.396758] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3611936 ] 00:15:18.630 [2024-11-26 19:53:19.435355] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:18.630 [2024-11-26 19:53:19.444350] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:18.630 [2024-11-26 19:53:19.444368] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd3feb8b000 00:15:18.630 [2024-11-26 19:53:19.445349] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:18.630 [2024-11-26 19:53:19.446359] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:18.893 [2024-11-26 19:53:19.447367] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:18.893 [2024-11-26 19:53:19.448376] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:18.893 [2024-11-26 19:53:19.449383] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:18.893 [2024-11-26 19:53:19.450389] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:18.893 [2024-11-26 19:53:19.451394] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:18.893 [2024-11-26 19:53:19.452394] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:15:18.893 [2024-11-26 19:53:19.453405] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:18.893 [2024-11-26 19:53:19.453412] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd3feb80000 00:15:18.893 [2024-11-26 19:53:19.454321] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:18.893 [2024-11-26 19:53:19.467700] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:18.893 [2024-11-26 19:53:19.467717] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:18.893 [2024-11-26 19:53:19.469752] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:18.893 [2024-11-26 19:53:19.469784] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:18.893 [2024-11-26 19:53:19.469840] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:18.893 [2024-11-26 19:53:19.469854] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:18.893 [2024-11-26 19:53:19.469858] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:18.893 [2024-11-26 19:53:19.470755] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:18.893 [2024-11-26 19:53:19.470764] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:18.893 [2024-11-26 19:53:19.470769] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:18.893 [2024-11-26 19:53:19.471757] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:18.893 [2024-11-26 19:53:19.471763] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:18.893 [2024-11-26 19:53:19.471769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:18.893 [2024-11-26 19:53:19.472765] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:18.893 [2024-11-26 19:53:19.472772] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:18.893 [2024-11-26 19:53:19.473774] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:18.893 [2024-11-26 19:53:19.473780] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:15:18.893 [2024-11-26 19:53:19.473783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:18.893 [2024-11-26 19:53:19.473788] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:18.893 [2024-11-26 19:53:19.473894] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:18.894 [2024-11-26 19:53:19.473897] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:18.894 [2024-11-26 19:53:19.473901] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:18.894 [2024-11-26 19:53:19.474783] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:18.894 [2024-11-26 19:53:19.475789] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:18.894 [2024-11-26 19:53:19.476792] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:18.894 [2024-11-26 19:53:19.477798] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:18.894 [2024-11-26 19:53:19.477829] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:18.894 [2024-11-26 19:53:19.478808] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:18.894 [2024-11-26 19:53:19.478815] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:18.894 [2024-11-26 19:53:19.478820] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:18.894 [2024-11-26 19:53:19.478835] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:18.894 [2024-11-26 19:53:19.478843] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:18.894 [2024-11-26 19:53:19.478854] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:18.894 [2024-11-26 19:53:19.478858] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:18.894 [2024-11-26 19:53:19.478861] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:18.894 [2024-11-26 19:53:19.478869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:18.894 [2024-11-26 19:53:19.485165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:18.894 
[2024-11-26 19:53:19.485175] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:18.894 [2024-11-26 19:53:19.485178] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:18.894 [2024-11-26 19:53:19.485181] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:18.894 [2024-11-26 19:53:19.485185] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:18.894 [2024-11-26 19:53:19.485188] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:18.894 [2024-11-26 19:53:19.485191] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:18.894 [2024-11-26 19:53:19.485195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:18.894 [2024-11-26 19:53:19.485200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:18.894 [2024-11-26 19:53:19.485207] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:18.894 [2024-11-26 19:53:19.493163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:18.894 [2024-11-26 19:53:19.493173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.894 [2024-11-26 19:53:19.493179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.894 [2024-11-26 19:53:19.493185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.894 [2024-11-26 19:53:19.493191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.894 [2024-11-26 19:53:19.493195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:18.894 [2024-11-26 19:53:19.493202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:18.894 [2024-11-26 19:53:19.493208] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:18.894 [2024-11-26 19:53:19.501163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:18.894 [2024-11-26 19:53:19.501171] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:18.894 [2024-11-26 19:53:19.501174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:15:18.894 [2024-11-26 19:53:19.501181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:18.894 [2024-11-26 19:53:19.501185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:18.894 [2024-11-26 19:53:19.501191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:18.894 [2024-11-26 19:53:19.509163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:18.894 [2024-11-26 19:53:19.509212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:18.894 [2024-11-26 19:53:19.509218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:18.894 [2024-11-26 19:53:19.509223] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:18.894 [2024-11-26 19:53:19.509226] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:18.894 [2024-11-26 19:53:19.509229] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:18.894 [2024-11-26 19:53:19.509233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:18.894 [2024-11-26 19:53:19.517164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:18.894 [2024-11-26 19:53:19.517175] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:18.894 [2024-11-26 19:53:19.517181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:18.894 [2024-11-26 19:53:19.517186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:18.894 [2024-11-26 19:53:19.517191] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:18.894 [2024-11-26 19:53:19.517194] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:18.894 [2024-11-26 19:53:19.517197] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:18.894 [2024-11-26 19:53:19.517201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:18.894 [2024-11-26 19:53:19.525163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:18.894 [2024-11-26 19:53:19.525172] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:18.894 [2024-11-26 19:53:19.525178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:15:18.894 [2024-11-26 19:53:19.525183] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:18.894 [2024-11-26 19:53:19.525186] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:18.894 [2024-11-26 19:53:19.525189] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:18.894 [2024-11-26 19:53:19.525195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:18.894 [2024-11-26 19:53:19.533164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:18.894 [2024-11-26 19:53:19.533174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:18.894 [2024-11-26 19:53:19.533179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:18.894 [2024-11-26 19:53:19.533184] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:18.894 [2024-11-26 19:53:19.533189] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:18.894 [2024-11-26 19:53:19.533192] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:18.894 [2024-11-26 19:53:19.533196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:18.894 [2024-11-26 19:53:19.533199] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:18.894 [2024-11-26 19:53:19.533203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:18.894 [2024-11-26 19:53:19.533206] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:18.894 [2024-11-26 19:53:19.533219] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:18.894 [2024-11-26 19:53:19.541164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:18.894 [2024-11-26 19:53:19.541175] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:18.894 [2024-11-26 19:53:19.549163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:18.894 [2024-11-26 19:53:19.549174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:18.894 [2024-11-26 19:53:19.557162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:15:18.894 [2024-11-26 19:53:19.557171] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:18.895 [2024-11-26 19:53:19.565163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:18.895 [2024-11-26 19:53:19.565182] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:18.895 [2024-11-26 19:53:19.565185] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:18.895 [2024-11-26 19:53:19.565188] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:18.895 [2024-11-26 19:53:19.565190] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:18.895 [2024-11-26 19:53:19.565193] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:18.895 [2024-11-26 19:53:19.565197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:18.895 [2024-11-26 19:53:19.565203] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:18.895 [2024-11-26 19:53:19.565208] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:18.895 [2024-11-26 19:53:19.565211] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:18.895 [2024-11-26 19:53:19.565215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:18.895 [2024-11-26 19:53:19.565220] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:18.895 [2024-11-26 19:53:19.565224] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:18.895 [2024-11-26 19:53:19.565226] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:18.895 [2024-11-26 19:53:19.565230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:18.895 [2024-11-26 19:53:19.565236] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:18.895 [2024-11-26 19:53:19.565239] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:18.895 [2024-11-26 19:53:19.565241] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:18.895 [2024-11-26 19:53:19.565246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:18.895 [2024-11-26 19:53:19.573163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:18.895 [2024-11-26 19:53:19.573175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:18.895 [2024-11-26 19:53:19.573182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:18.895 
[2024-11-26 19:53:19.573187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:18.895 ===================================================== 00:15:18.895 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:18.895 ===================================================== 00:15:18.895 Controller Capabilities/Features 00:15:18.895 ================================ 00:15:18.895 Vendor ID: 4e58 00:15:18.895 Subsystem Vendor ID: 4e58 00:15:18.895 Serial Number: SPDK2 00:15:18.895 Model Number: SPDK bdev Controller 00:15:18.895 Firmware Version: 25.01 00:15:18.895 Recommended Arb Burst: 6 00:15:18.895 IEEE OUI Identifier: 8d 6b 50 00:15:18.895 Multi-path I/O 00:15:18.895 May have multiple subsystem ports: Yes 00:15:18.895 May have multiple controllers: Yes 00:15:18.895 Associated with SR-IOV VF: No 00:15:18.895 Max Data Transfer Size: 131072 00:15:18.895 Max Number of Namespaces: 32 00:15:18.895 Max Number of I/O Queues: 127 00:15:18.895 NVMe Specification Version (VS): 1.3 00:15:18.895 NVMe Specification Version (Identify): 1.3 00:15:18.895 Maximum Queue Entries: 256 00:15:18.895 Contiguous Queues Required: Yes 00:15:18.895 Arbitration Mechanisms Supported 00:15:18.895 Weighted Round Robin: Not Supported 00:15:18.895 Vendor Specific: Not Supported 00:15:18.895 Reset Timeout: 15000 ms 00:15:18.895 Doorbell Stride: 4 bytes 00:15:18.895 NVM Subsystem Reset: Not Supported 00:15:18.895 Command Sets Supported 00:15:18.895 NVM Command Set: Supported 00:15:18.895 Boot Partition: Not Supported 00:15:18.895 Memory Page Size Minimum: 4096 bytes 00:15:18.895 Memory Page Size Maximum: 4096 bytes 00:15:18.895 Persistent Memory Region: Not Supported 00:15:18.895 Optional Asynchronous Events Supported 00:15:18.895 Namespace Attribute Notices: Supported 00:15:18.895 Firmware Activation Notices: Not Supported 00:15:18.895 ANA Change Notices: Not Supported 00:15:18.895 PLE Aggregate Log Change Notices: Not Supported 00:15:18.895 LBA Status Info Alert Notices: Not Supported 00:15:18.895 EGE Aggregate Log Change Notices: Not Supported 00:15:18.895 Normal NVM Subsystem Shutdown event: Not Supported 00:15:18.895 Zone Descriptor Change Notices: Not Supported 00:15:18.895 Discovery Log Change Notices: Not Supported 00:15:18.895 Controller Attributes 00:15:18.895 128-bit Host Identifier: Supported 00:15:18.895 Non-Operational Permissive Mode: Not Supported 00:15:18.895 NVM Sets: Not Supported 00:15:18.895 Read Recovery Levels: Not Supported 00:15:18.895 Endurance Groups: Not Supported 00:15:18.895 Predictable Latency Mode: Not Supported 00:15:18.895 Traffic Based Keep ALive: Not Supported 00:15:18.895 Namespace Granularity: Not Supported 00:15:18.895 SQ Associations: Not Supported 00:15:18.895 UUID List: Not Supported 00:15:18.895 Multi-Domain Subsystem: Not Supported 00:15:18.895 Fixed Capacity Management: Not Supported 00:15:18.895 Variable Capacity Management: Not Supported 00:15:18.895 Delete Endurance Group: Not Supported 00:15:18.895 Delete NVM Set: Not Supported 00:15:18.895 Extended LBA Formats Supported: Not Supported 00:15:18.895 Flexible Data Placement Supported: Not Supported 00:15:18.895 00:15:18.895 Controller Memory Buffer Support 00:15:18.895 ================================ 00:15:18.895 Supported: No 00:15:18.895 00:15:18.895 Persistent Memory Region Support 00:15:18.895 ================================ 00:15:18.895 Supported: No 00:15:18.895 00:15:18.895 Admin Command Set Attributes 
00:15:18.895 ============================ 00:15:18.895 Security Send/Receive: Not Supported 00:15:18.895 Format NVM: Not Supported 00:15:18.895 Firmware Activate/Download: Not Supported 00:15:18.895 Namespace Management: Not Supported 00:15:18.895 Device Self-Test: Not Supported 00:15:18.895 Directives: Not Supported 00:15:18.895 NVMe-MI: Not Supported 00:15:18.895 Virtualization Management: Not Supported 00:15:18.895 Doorbell Buffer Config: Not Supported 00:15:18.895 Get LBA Status Capability: Not Supported 00:15:18.895 Command & Feature Lockdown Capability: Not Supported 00:15:18.895 Abort Command Limit: 4 00:15:18.895 Async Event Request Limit: 4 00:15:18.895 Number of Firmware Slots: N/A 00:15:18.895 Firmware Slot 1 Read-Only: N/A 00:15:18.895 Firmware Activation Without Reset: N/A 00:15:18.895 Multiple Update Detection Support: N/A 00:15:18.895 Firmware Update Granularity: No Information Provided 00:15:18.895 Per-Namespace SMART Log: No 00:15:18.895 Asymmetric Namespace Access Log Page: Not Supported 00:15:18.895 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:18.895 Command Effects Log Page: Supported 00:15:18.895 Get Log Page Extended Data: Supported 00:15:18.895 Telemetry Log Pages: Not Supported 00:15:18.895 Persistent Event Log Pages: Not Supported 00:15:18.895 Supported Log Pages Log Page: May Support 00:15:18.895 Commands Supported & Effects Log Page: Not Supported 00:15:18.895 Feature Identifiers & Effects Log Page:May Support 00:15:18.895 NVMe-MI Commands & Effects Log Page: May Support 00:15:18.895 Data Area 4 for Telemetry Log: Not Supported 00:15:18.895 Error Log Page Entries Supported: 128 00:15:18.895 Keep Alive: Supported 00:15:18.895 Keep Alive Granularity: 10000 ms 00:15:18.895 00:15:18.895 NVM Command Set Attributes 00:15:18.895 ========================== 00:15:18.895 Submission Queue Entry Size 00:15:18.895 Max: 64 00:15:18.895 Min: 64 00:15:18.895 Completion Queue Entry Size 00:15:18.895 Max: 16 00:15:18.895 Min: 16 00:15:18.895 Number of Namespaces: 32 00:15:18.895 Compare Command: Supported 00:15:18.895 Write Uncorrectable Command: Not Supported 00:15:18.895 Dataset Management Command: Supported 00:15:18.895 Write Zeroes Command: Supported 00:15:18.895 Set Features Save Field: Not Supported 00:15:18.895 Reservations: Not Supported 00:15:18.895 Timestamp: Not Supported 00:15:18.895 Copy: Supported 00:15:18.895 Volatile Write Cache: Present 00:15:18.895 Atomic Write Unit (Normal): 1 00:15:18.895 Atomic Write Unit (PFail): 1 00:15:18.895 Atomic Compare & Write Unit: 1 00:15:18.895 Fused Compare & Write: Supported 00:15:18.895 Scatter-Gather List 00:15:18.895 SGL Command Set: Supported (Dword aligned) 00:15:18.895 SGL Keyed: Not Supported 00:15:18.895 SGL Bit Bucket Descriptor: Not Supported 00:15:18.895 SGL Metadata Pointer: Not Supported 00:15:18.895 Oversized SGL: Not Supported 00:15:18.895 SGL Metadata Address: Not Supported 00:15:18.895 SGL Offset: Not Supported 00:15:18.895 Transport SGL Data Block: Not Supported 00:15:18.895 Replay Protected Memory Block: Not Supported 00:15:18.896 00:15:18.896 Firmware Slot Information 00:15:18.896 ========================= 00:15:18.896 Active slot: 1 00:15:18.896 Slot 1 Firmware Revision: 25.01 00:15:18.896 00:15:18.896 00:15:18.896 Commands Supported and Effects 00:15:18.896 ============================== 00:15:18.896 Admin Commands 00:15:18.896 -------------- 00:15:18.896 Get Log Page (02h): Supported 00:15:18.896 Identify (06h): Supported 00:15:18.896 Abort (08h): Supported 00:15:18.896 Set Features (09h): Supported 
00:15:18.896 Get Features (0Ah): Supported 00:15:18.896 Asynchronous Event Request (0Ch): Supported 00:15:18.896 Keep Alive (18h): Supported 00:15:18.896 I/O Commands 00:15:18.896 ------------ 00:15:18.896 Flush (00h): Supported LBA-Change 00:15:18.896 Write (01h): Supported LBA-Change 00:15:18.896 Read (02h): Supported 00:15:18.896 Compare (05h): Supported 00:15:18.896 Write Zeroes (08h): Supported LBA-Change 00:15:18.896 Dataset Management (09h): Supported LBA-Change 00:15:18.896 Copy (19h): Supported LBA-Change 00:15:18.896
00:15:18.896 Error Log 00:15:18.896 ========= 00:15:18.896 00:15:18.896 Arbitration 00:15:18.896 =========== 00:15:18.896 Arbitration Burst: 1 00:15:18.896 00:15:18.896 Power Management 00:15:18.896 ================ 00:15:18.896 Number of Power States: 1 00:15:18.896 Current Power State: Power State #0 00:15:18.896 Power State #0: 00:15:18.896 Max Power: 0.00 W 00:15:18.896 Non-Operational State: Operational 00:15:18.896 Entry Latency: Not Reported 00:15:18.896 Exit Latency: Not Reported 00:15:18.896 Relative Read Throughput: 0 00:15:18.896 Relative Read Latency: 0 00:15:18.896 Relative Write Throughput: 0 00:15:18.896 Relative Write Latency: 0 00:15:18.896 Idle Power: Not Reported 00:15:18.896 Active Power: Not Reported 00:15:18.896 Non-Operational Permissive Mode: Not Supported 00:15:18.896
00:15:18.896 Health Information 00:15:18.896 ================== 00:15:18.896 Critical Warnings: 00:15:18.896 Available Spare Space: OK 00:15:18.896 Temperature: OK 00:15:18.896 Device Reliability: OK 00:15:18.896 Read Only: No 00:15:18.896 Volatile Memory Backup: OK 00:15:18.896 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:18.896 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:18.896 Available Spare: 0% 00:15:18.896 Available Spare Threshold: 0% 00:15:18.896 Life Percentage Used: 0% 00:15:18.896 Data Units Read: 0 00:15:18.896 Data Units Written: 0 00:15:18.896 Host Read Commands: 0 00:15:18.896 Host Write Commands: 0 00:15:18.896 Controller Busy Time: 0 minutes 00:15:18.896 Power Cycles: 0 00:15:18.896 Power On Hours: 0 hours 00:15:18.896 Unsafe Shutdowns: 0 00:15:18.896 Unrecoverable Media Errors: 0 00:15:18.896 Lifetime Error Log Entries: 0 00:15:18.896 Warning Temperature Time: 0 minutes 00:15:18.896 Critical Temperature Time: 0 minutes 00:15:18.896
00:15:18.896 Number of Queues 00:15:18.896 ================ 00:15:18.896 Number of I/O Submission Queues: 127 00:15:18.896 Number of I/O Completion Queues: 127 00:15:18.896 00:15:18.896 Active Namespaces 00:15:18.896 ================= 00:15:18.896 Namespace ID:1 00:15:18.896 Error Recovery Timeout: Unlimited 00:15:18.896 Command Set Identifier: NVM (00h) 00:15:18.896 Deallocate: Supported 00:15:18.896 Deallocated/Unwritten Error: Not Supported 00:15:18.896 Deallocated Read Value: Unknown 00:15:18.896 Deallocate in Write Zeroes: Not Supported 00:15:18.896 Deallocated Guard Field: 0xFFFF 00:15:18.896 Flush: Supported 00:15:18.896 Reservation: Supported 00:15:18.896 Namespace Sharing Capabilities: Multiple Controllers 00:15:18.896 Size (in LBAs): 131072 (0GiB) 00:15:18.896 Capacity (in LBAs): 131072 (0GiB) 00:15:18.896 Utilization (in LBAs): 131072 (0GiB) 00:15:18.896 NGUID: 1D818864F5C14451BF92073AE81E5584 00:15:18.896 UUID: 1d818864-f5c1-4451-bf92-073ae81e5584 00:15:18.896 Thin Provisioning: Not Supported 00:15:18.896 Per-NS Atomic Units: Yes 00:15:18.896 Atomic Boundary Size (Normal): 0 00:15:18.896 Atomic Boundary Size (PFail): 0 00:15:18.896 Atomic Boundary Offset: 0 00:15:18.896 Maximum Single Source Range Length: 65535 00:15:18.896 Maximum Copy Length: 65535 00:15:18.896 Maximum Source Range Count: 1 00:15:18.896 NGUID/EUI64 Never Reused: No 00:15:18.896 Namespace Write Protected: No 00:15:18.896 Number of LBA Formats: 1 00:15:18.896 Current LBA Format: LBA Format #00 00:15:18.896 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:18.896 00:15:18.896
[2024-11-26 19:53:19.573259] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:18.896 [2024-11-26 19:53:19.584163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:18.896 [2024-11-26 19:53:19.584190] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:18.896 [2024-11-26 19:53:19.584196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.896 [2024-11-26 19:53:19.584201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.896 [2024-11-26 19:53:19.584206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.896 [2024-11-26 19:53:19.584210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.896 [2024-11-26 19:53:19.584246] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:18.896 [2024-11-26 19:53:19.584253] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:18.896 [2024-11-26 19:53:19.585248] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:18.896 [2024-11-26 19:53:19.585287] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:18.896 [2024-11-26 19:53:19.585292] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:18.896 [2024-11-26 19:53:19.586251] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:18.896 [2024-11-26 19:53:19.586263] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:18.896 [2024-11-26 19:53:19.586304] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:18.896 [2024-11-26 19:53:19.587276] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:18.896
19:53:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:19.157 [2024-11-26 19:53:19.776549] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:24.444 Initializing NVMe Controllers 00:15:24.444
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:24.444 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:24.444 Initialization complete. Launching workers. 00:15:24.444 ======================================================== 00:15:24.444 Latency(us) 00:15:24.444 Device Information : IOPS MiB/s Average min max 00:15:24.444 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40056.00 156.47 3197.90 862.49 7532.90 00:15:24.444 ======================================================== 00:15:24.444 Total : 40056.00 156.47 3197.90 862.49 7532.90 00:15:24.444 00:15:24.444 [2024-11-26 19:53:24.883360] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:24.444 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:24.444 [2024-11-26 19:53:25.074972] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:29.755 Initializing NVMe Controllers 00:15:29.755 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:29.755 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:29.755 Initialization complete. Launching workers. 00:15:29.755 ======================================================== 00:15:29.755 Latency(us) 00:15:29.755 Device Information : IOPS MiB/s Average min max 00:15:29.755 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39962.05 156.10 3202.92 868.56 8718.00 00:15:29.755 ======================================================== 00:15:29.755 Total : 39962.05 156.10 3202.92 868.56 8718.00 00:15:29.755 00:15:29.755 [2024-11-26 19:53:30.093838] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:29.755 19:53:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:29.755 [2024-11-26 19:53:30.295463] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:35.042 [2024-11-26 19:53:35.423256] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:35.042 Initializing NVMe Controllers 00:15:35.042 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:35.042 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:35.043 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:35.043 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:35.043 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:35.043 Initialization complete. Launching workers. 
00:15:35.043 Starting thread on core 2 00:15:35.043 Starting thread on core 3 00:15:35.043 Starting thread on core 1 00:15:35.043 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:35.043 [2024-11-26 19:53:35.678686] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:38.343 [2024-11-26 19:53:38.732018] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:38.343 Initializing NVMe Controllers 00:15:38.343 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:38.343 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:38.343 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:38.343 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:38.343 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:38.343 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:38.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:38.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:38.343 Initialization complete. Launching workers. 00:15:38.343 Starting thread on core 1 with urgent priority queue 00:15:38.343 Starting thread on core 2 with urgent priority queue 00:15:38.343 Starting thread on core 3 with urgent priority queue 00:15:38.343 Starting thread on core 0 with urgent priority queue 00:15:38.343 SPDK bdev Controller (SPDK2 ) core 0: 12104.00 IO/s 8.26 secs/100000 ios 00:15:38.343 SPDK bdev Controller (SPDK2 ) core 1: 13891.33 IO/s 7.20 secs/100000 ios 00:15:38.343 SPDK bdev Controller (SPDK2 ) core 2: 14644.67 IO/s 6.83 secs/100000 ios 00:15:38.343 SPDK bdev Controller (SPDK2 ) core 3: 12720.33 IO/s 7.86 secs/100000 ios 00:15:38.343 ======================================================== 00:15:38.343 00:15:38.343 19:53:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:38.343 [2024-11-26 19:53:38.983550] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:38.343 Initializing NVMe Controllers 00:15:38.343 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:38.343 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:38.343 Namespace ID: 1 size: 0GB 00:15:38.343 Initialization complete. 00:15:38.343 INFO: using host memory buffer for IO 00:15:38.343 Hello world! 
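Every example binary in this stretch (spdk_nvme_perf read and write, reconnect, arbitration, hello_world) addresses the controller through the same -r transport ID string; only the workload flags differ: -q queue depth, -o I/O size in bytes, -w access pattern, -M read percentage for mixed workloads, -t run time in seconds, -c reactor core mask. A sketch that combines flags visible in this log into a single mixed-workload run (the combination itself is an assumption, not one of the test's invocations):

    # 50/50 random read/write, queue depth 32, 4 KiB I/Os, 5 s, cores 1-3 (0xE):
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
        -s 256 -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE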
00:15:38.343 [2024-11-26 19:53:38.993625] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:38.343 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:38.604 [2024-11-26 19:53:39.231520] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:39.541 Initializing NVMe Controllers 00:15:39.541 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:39.541 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:39.541 Initialization complete. Launching workers. 00:15:39.541 submit (in ns) avg, min, max = 6353.6, 2827.5, 4994045.0 00:15:39.541 complete (in ns) avg, min, max = 15994.9, 1624.2, 5991890.0 00:15:39.541 00:15:39.541 Submit histogram 00:15:39.541 ================ 00:15:39.541 Range in us Cumulative Count 00:15:39.541 2.827 - 2.840: 0.3127% ( 63) 00:15:39.541 2.840 - 2.853: 1.3452% ( 208) 00:15:39.541 2.853 - 2.867: 3.6434% ( 463) 00:15:39.541 2.867 - 2.880: 8.1158% ( 901) 00:15:39.541 2.880 - 2.893: 13.9035% ( 1166) 00:15:39.541 2.893 - 2.907: 19.5324% ( 1134) 00:15:39.541 2.907 - 2.920: 25.8463% ( 1272) 00:15:39.541 2.920 - 2.933: 31.5844% ( 1156) 00:15:39.541 2.933 - 2.947: 37.5211% ( 1196) 00:15:39.541 2.947 - 2.960: 41.9041% ( 883) 00:15:39.541 2.960 - 2.973: 47.0317% ( 1033) 00:15:39.541 2.973 - 2.987: 52.0649% ( 1014) 00:15:39.541 2.987 - 3.000: 59.2971% ( 1457) 00:15:39.541 3.000 - 3.013: 68.1475% ( 1783) 00:15:39.541 3.013 - 3.027: 76.4966% ( 1682) 00:15:39.541 3.027 - 3.040: 83.2374% ( 1358) 00:15:39.541 3.040 - 3.053: 89.1343% ( 1188) 00:15:39.541 3.053 - 3.067: 93.8946% ( 959) 00:15:39.541 3.067 - 3.080: 96.4311% ( 511) 00:15:39.541 3.080 - 3.093: 98.1187% ( 340) 00:15:39.541 3.093 - 3.107: 98.8832% ( 154) 00:15:39.541 3.107 - 3.120: 99.2455% ( 73) 00:15:39.541 3.120 - 3.133: 99.3895% ( 29) 00:15:39.541 3.133 - 3.147: 99.4639% ( 15) 00:15:39.541 3.147 - 3.160: 99.4838% ( 4) 00:15:39.541 3.160 - 3.173: 99.5086% ( 5) 00:15:39.541 3.173 - 3.187: 99.5235% ( 3) 00:15:39.541 3.280 - 3.293: 99.5284% ( 1) 00:15:39.541 3.347 - 3.360: 99.5334% ( 1) 00:15:39.541 3.413 - 3.440: 99.5384% ( 1) 00:15:39.541 3.467 - 3.493: 99.5433% ( 1) 00:15:39.541 3.573 - 3.600: 99.5483% ( 1) 00:15:39.541 3.787 - 3.813: 99.5533% ( 1) 00:15:39.541 3.840 - 3.867: 99.5582% ( 1) 00:15:39.541 3.893 - 3.920: 99.5682% ( 2) 00:15:39.541 4.240 - 4.267: 99.5731% ( 1) 00:15:39.541 4.347 - 4.373: 99.5781% ( 1) 00:15:39.541 4.507 - 4.533: 99.5880% ( 2) 00:15:39.541 4.560 - 4.587: 99.5979% ( 2) 00:15:39.541 4.640 - 4.667: 99.6128% ( 3) 00:15:39.541 4.773 - 4.800: 99.6178% ( 1) 00:15:39.541 4.800 - 4.827: 99.6228% ( 1) 00:15:39.541 4.853 - 4.880: 99.6277% ( 1) 00:15:39.541 4.880 - 4.907: 99.6327% ( 1) 00:15:39.541 4.933 - 4.960: 99.6376% ( 1) 00:15:39.541 4.960 - 4.987: 99.6525% ( 3) 00:15:39.541 4.987 - 5.013: 99.6575% ( 1) 00:15:39.541 5.013 - 5.040: 99.6774% ( 4) 00:15:39.541 5.040 - 5.067: 99.6873% ( 2) 00:15:39.542 5.067 - 5.093: 99.6922% ( 1) 00:15:39.542 5.093 - 5.120: 99.7022% ( 2) 00:15:39.542 5.120 - 5.147: 99.7121% ( 2) 00:15:39.542 5.147 - 5.173: 99.7220% ( 2) 00:15:39.542 5.200 - 5.227: 99.7419% ( 4) 00:15:39.542 5.227 - 5.253: 99.7468% ( 1) 00:15:39.542 5.280 - 5.307: 99.7518% ( 1) 00:15:39.542 5.333 - 5.360: 99.7568% ( 1) 00:15:39.542 5.360 - 5.387: 
99.7617% ( 1) 00:15:39.542 5.387 - 5.413: 99.7667% ( 1) 00:15:39.542 5.413 - 5.440: 99.7717% ( 1) 00:15:39.542 5.440 - 5.467: 99.7766% ( 1) 00:15:39.542 5.493 - 5.520: 99.7866% ( 2) 00:15:39.542 5.520 - 5.547: 99.7915% ( 1) 00:15:39.542 5.573 - 5.600: 99.7965% ( 1) 00:15:39.542 5.600 - 5.627: 99.8064% ( 2) 00:15:39.542 5.627 - 5.653: 99.8114% ( 1) 00:15:39.542 5.653 - 5.680: 99.8163% ( 1) 00:15:39.542 5.680 - 5.707: 99.8312% ( 3) 00:15:39.542 5.760 - 5.787: 99.8362% ( 1) 00:15:39.542 5.813 - 5.840: 99.8561% ( 4) 00:15:39.542 5.947 - 5.973: 99.8610% ( 1) 00:15:39.542 6.053 - 6.080: 99.8709% ( 2) 00:15:39.542 6.080 - 6.107: 99.8759% ( 1) 00:15:39.542 6.187 - 6.213: 99.8858% ( 2) 00:15:39.542 6.267 - 6.293: 99.8908% ( 1) 00:15:39.542 6.347 - 6.373: 99.8958% ( 1) 00:15:39.542 6.773 - 6.800: 99.9007% ( 1) 00:15:39.542 7.413 - 7.467: 99.9057% ( 1) 00:15:39.542 8.213 - 8.267: 99.9107% ( 1) 00:15:39.542 [2024-11-26 19:53:40.332721] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:39.802 9.120 - 9.173: 99.9156% ( 1) 00:15:39.802 3031.040 - 3044.693: 99.9206% ( 1) 00:15:39.802 3986.773 - 4014.080: 99.9950% ( 15) 00:15:39.802 4969.813 - 4997.120: 100.0000% ( 1) 00:15:39.802 00:15:39.802 Complete histogram 00:15:39.802 ================== 00:15:39.802 Range in us Cumulative Count 00:15:39.802 1.620 - 1.627: 0.0050% ( 1) 00:15:39.802 1.627 - 1.633: 0.0099% ( 1) 00:15:39.802 1.633 - 1.640: 0.2680% ( 52) 00:15:39.802 1.640 - 1.647: 0.9977% ( 147) 00:15:39.802 1.647 - 1.653: 1.0523% ( 11) 00:15:39.802 1.653 - 1.660: 1.1764% ( 25) 00:15:39.802 1.660 - 1.667: 1.2856% ( 22) 00:15:39.802 1.667 - 1.673: 1.3055% ( 4) 00:15:39.802 1.673 - 1.680: 1.3253% ( 4) 00:15:39.802 1.680 - 1.687: 6.1997% ( 982) 00:15:39.802 1.687 - 1.693: 31.4157% ( 5080) 00:15:39.802 1.693 - 1.700: 58.9149% ( 5540) 00:15:39.802 1.700 - 1.707: 65.8443% ( 1396) 00:15:39.802 1.707 - 1.720: 79.9017% ( 2832) 00:15:39.802 1.720 - 1.733: 83.4359% ( 712) 00:15:39.802 1.733 - 1.747: 85.0194% ( 319) 00:15:39.802 1.747 - 1.760: 88.6578% ( 733) 00:15:39.802 1.760 - 1.773: 94.1030% ( 1097) 00:15:39.802 1.773 - 1.787: 97.4635% ( 677) 00:15:39.802 1.787 - 1.800: 98.8980% ( 289) 00:15:39.802 1.800 - 1.813: 99.3299% ( 87) 00:15:39.802 1.813 - 1.827: 99.4788% ( 30) 00:15:39.802 1.827 - 1.840: 99.4937% ( 3) 00:15:39.802 2.000 - 2.013: 99.4987% ( 1) 00:15:39.802 3.413 - 3.440: 99.5036% ( 1) 00:15:39.802 3.600 - 3.627: 99.5086% ( 1) 00:15:39.802 3.653 - 3.680: 99.5136% ( 1) 00:15:39.802 3.680 - 3.707: 99.5185% ( 1) 00:15:39.802 3.707 - 3.733: 99.5284% ( 2) 00:15:39.802 3.813 - 3.840: 99.5384% ( 2) 00:15:39.802 3.947 - 3.973: 99.5483% ( 2) 00:15:39.802 3.973 - 4.000: 99.5533% ( 1) 00:15:39.802 4.053 - 4.080: 99.5582% ( 1) 00:15:39.802 4.080 - 4.107: 99.5682% ( 2) 00:15:39.802 4.107 - 4.133: 99.5781% ( 2) 00:15:39.802 4.187 - 4.213: 99.5880% ( 2) 00:15:39.802 4.213 - 4.240: 99.5930% ( 1) 00:15:39.802 4.347 - 4.373: 99.6029% ( 2) 00:15:39.802 4.453 - 4.480: 99.6079% ( 1) 00:15:39.802 4.533 - 4.560: 99.6128% ( 1) 00:15:39.802 5.067 - 5.093: 99.6178% ( 1) 00:15:39.802 8.267 - 8.320: 99.6228% ( 1) 00:15:39.802 8.960 - 9.013: 99.6277% ( 1) 00:15:39.802 10.240 - 10.293: 99.6327% ( 1) 00:15:39.802 35.413 - 35.627: 99.6376% ( 1) 00:15:39.802 132.267 - 133.120: 99.6426% ( 1) 00:15:39.802 2020.693 - 2034.347: 99.6476% ( 1) 00:15:39.802 3986.773 - 4014.080: 99.9950% ( 70) 00:15:39.802 5980.160 - 6007.467: 100.0000% ( 1) 00:15:39.802 00:15:39.802 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:39.802 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:39.802 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:39.802 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:39.802 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:39.802 [ 00:15:39.802 { 00:15:39.802 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:39.802 "subtype": "Discovery", 00:15:39.802 "listen_addresses": [], 00:15:39.802 "allow_any_host": true, 00:15:39.802 "hosts": [] 00:15:39.802 }, 00:15:39.802 { 00:15:39.802 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:39.802 "subtype": "NVMe", 00:15:39.802 "listen_addresses": [ 00:15:39.802 { 00:15:39.802 "trtype": "VFIOUSER", 00:15:39.802 "adrfam": "IPv4", 00:15:39.802 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:39.802 "trsvcid": "0" 00:15:39.802 } 00:15:39.802 ], 00:15:39.802 "allow_any_host": true, 00:15:39.802 "hosts": [], 00:15:39.802 "serial_number": "SPDK1", 00:15:39.802 "model_number": "SPDK bdev Controller", 00:15:39.802 "max_namespaces": 32, 00:15:39.802 "min_cntlid": 1, 00:15:39.802 "max_cntlid": 65519, 00:15:39.802 "namespaces": [ 00:15:39.802 { 00:15:39.802 "nsid": 1, 00:15:39.802 "bdev_name": "Malloc1", 00:15:39.802 "name": "Malloc1", 00:15:39.802 "nguid": "38EDC2FF501A4C4B977713080632D40E", 00:15:39.802 "uuid": "38edc2ff-501a-4c4b-9777-13080632d40e" 00:15:39.802 }, 00:15:39.802 { 00:15:39.802 "nsid": 2, 00:15:39.802 "bdev_name": "Malloc3", 00:15:39.802 "name": "Malloc3", 00:15:39.802 "nguid": "A4B7F77557724CD7B201CFA924087418", 00:15:39.802 "uuid": "a4b7f775-5772-4cd7-b201-cfa924087418" 00:15:39.802 } 00:15:39.802 ] 00:15:39.802 }, 00:15:39.802 { 00:15:39.802 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:39.802 "subtype": "NVMe", 00:15:39.802 "listen_addresses": [ 00:15:39.802 { 00:15:39.802 "trtype": "VFIOUSER", 00:15:39.802 "adrfam": "IPv4", 00:15:39.802 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:39.802 "trsvcid": "0" 00:15:39.802 } 00:15:39.802 ], 00:15:39.802 "allow_any_host": true, 00:15:39.802 "hosts": [], 00:15:39.802 "serial_number": "SPDK2", 00:15:39.802 "model_number": "SPDK bdev Controller", 00:15:39.802 "max_namespaces": 32, 00:15:39.802 "min_cntlid": 1, 00:15:39.802 "max_cntlid": 65519, 00:15:39.802 "namespaces": [ 00:15:39.802 { 00:15:39.802 "nsid": 1, 00:15:39.802 "bdev_name": "Malloc2", 00:15:39.802 "name": "Malloc2", 00:15:39.802 "nguid": "1D818864F5C14451BF92073AE81E5584", 00:15:39.802 "uuid": "1d818864-f5c1-4451-bf92-073ae81e5584" 00:15:39.802 } 00:15:39.802 ] 00:15:39.802 } 00:15:39.802 ] 00:15:39.802 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:39.802 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:39.802 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3616134 
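The JSON block above is the raw response of the nvmf_get_subsystems RPC, fetched over the target's default /var/tmp/spdk.sock socket; it lists the discovery subsystem and both vfio-user subsystems with their malloc-backed namespaces. The same listing can be pulled ad hoc and filtered (piping through jq is an assumption here, not part of the test):

    # One object per subsystem with its namespace names (jq assumed installed):
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems \
        | jq '.[] | {nqn, namespaces: [.namespaces[]?.name]}'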
00:15:39.802 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:39.802 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:39.802 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:39.802 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:39.802 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:39.802 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:39.802 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:40.062 [2024-11-26 19:53:40.702553] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:40.062 Malloc4 00:15:40.062 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:40.321 [2024-11-26 19:53:40.896804] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:40.321 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:40.321 Asynchronous Event Request test 00:15:40.321 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:40.321 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:40.321 Registering asynchronous event callbacks... 00:15:40.321 Starting namespace attribute notice tests for all controllers... 00:15:40.321 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:40.321 aer_cb - Changed Namespace 00:15:40.321 Cleaning up... 
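The Asynchronous Event Request test above works by hot-adding a namespace while the aer utility sits connected: Malloc4 is created and attached to cnode2 as NSID 2, the target raises a namespace-attribute-changed notice (log page 4, aen_event_type 0x02), and aer_cb fires. The two RPCs, as issued in this log:

    # 64 MiB malloc bdev with 512-byte blocks, exposed as NSID 2 of cnode2;
    # attaching it is what triggers the AER notice logged above.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2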
00:15:40.321 [ 00:15:40.321 { 00:15:40.321 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:40.321 "subtype": "Discovery", 00:15:40.321 "listen_addresses": [], 00:15:40.321 "allow_any_host": true, 00:15:40.321 "hosts": [] 00:15:40.321 }, 00:15:40.321 { 00:15:40.321 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:40.321 "subtype": "NVMe", 00:15:40.321 "listen_addresses": [ 00:15:40.321 { 00:15:40.321 "trtype": "VFIOUSER", 00:15:40.321 "adrfam": "IPv4", 00:15:40.321 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:40.321 "trsvcid": "0" 00:15:40.321 } 00:15:40.321 ], 00:15:40.321 "allow_any_host": true, 00:15:40.321 "hosts": [], 00:15:40.321 "serial_number": "SPDK1", 00:15:40.321 "model_number": "SPDK bdev Controller", 00:15:40.321 "max_namespaces": 32, 00:15:40.321 "min_cntlid": 1, 00:15:40.321 "max_cntlid": 65519, 00:15:40.321 "namespaces": [ 00:15:40.321 { 00:15:40.321 "nsid": 1, 00:15:40.321 "bdev_name": "Malloc1", 00:15:40.321 "name": "Malloc1", 00:15:40.321 "nguid": "38EDC2FF501A4C4B977713080632D40E", 00:15:40.321 "uuid": "38edc2ff-501a-4c4b-9777-13080632d40e" 00:15:40.321 }, 00:15:40.321 { 00:15:40.321 "nsid": 2, 00:15:40.321 "bdev_name": "Malloc3", 00:15:40.321 "name": "Malloc3", 00:15:40.321 "nguid": "A4B7F77557724CD7B201CFA924087418", 00:15:40.321 "uuid": "a4b7f775-5772-4cd7-b201-cfa924087418" 00:15:40.321 } 00:15:40.321 ] 00:15:40.321 }, 00:15:40.321 { 00:15:40.321 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:40.321 "subtype": "NVMe", 00:15:40.321 "listen_addresses": [ 00:15:40.321 { 00:15:40.321 "trtype": "VFIOUSER", 00:15:40.321 "adrfam": "IPv4", 00:15:40.321 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:40.321 "trsvcid": "0" 00:15:40.321 } 00:15:40.321 ], 00:15:40.321 "allow_any_host": true, 00:15:40.321 "hosts": [], 00:15:40.321 "serial_number": "SPDK2", 00:15:40.321 "model_number": "SPDK bdev Controller", 00:15:40.321 "max_namespaces": 32, 00:15:40.321 "min_cntlid": 1, 00:15:40.321 "max_cntlid": 65519, 00:15:40.321 "namespaces": [ 00:15:40.321 { 00:15:40.321 "nsid": 1, 00:15:40.321 "bdev_name": "Malloc2", 00:15:40.321 "name": "Malloc2", 00:15:40.321 "nguid": "1D818864F5C14451BF92073AE81E5584", 00:15:40.321 "uuid": "1d818864-f5c1-4451-bf92-073ae81e5584" 00:15:40.321 }, 00:15:40.321 { 00:15:40.321 "nsid": 2, 00:15:40.321 "bdev_name": "Malloc4", 00:15:40.322 "name": "Malloc4", 00:15:40.322 "nguid": "045B06B6A7204C1C9FC69A94E8FE6A53", 00:15:40.322 "uuid": "045b06b6-a720-4c1c-9fc6-9a94e8fe6a53" 00:15:40.322 } 00:15:40.322 ] 00:15:40.322 } 00:15:40.322 ] 00:15:40.322 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3616134 00:15:40.322 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:40.322 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3607190 00:15:40.322 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3607190 ']' 00:15:40.322 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3607190 00:15:40.322 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:40.322 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:40.322 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3607190 00:15:40.582 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:40.582 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:40.582 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3607190' 00:15:40.582 killing process with pid 3607190 00:15:40.582 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3607190 00:15:40.582 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3607190 00:15:40.582 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:40.582 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:40.582 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:40.582 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:40.582 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:40.582 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3616294 00:15:40.582 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3616294' 00:15:40.582 Process pid: 3616294 00:15:40.582 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:40.582 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:40.582 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3616294 00:15:40.582 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3616294 ']' 00:15:40.582 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.582 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:40.582 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.582 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:40.582 19:53:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:40.582 [2024-11-26 19:53:41.375845] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:40.582 [2024-11-26 19:53:41.376783] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:15:40.582 [2024-11-26 19:53:41.376826] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.841 [2024-11-26 19:53:41.461781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:40.841 [2024-11-26 19:53:41.491181] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.841 [2024-11-26 19:53:41.491213] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.841 [2024-11-26 19:53:41.491220] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:40.841 [2024-11-26 19:53:41.491226] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:40.841 [2024-11-26 19:53:41.491232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:40.841 [2024-11-26 19:53:41.492486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.841 [2024-11-26 19:53:41.492638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:40.841 [2024-11-26 19:53:41.492787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.841 [2024-11-26 19:53:41.492789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:40.841 [2024-11-26 19:53:41.544484] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:40.841 [2024-11-26 19:53:41.545474] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:40.841 [2024-11-26 19:53:41.546287] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:40.841 [2024-11-26 19:53:41.546954] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:40.841 [2024-11-26 19:53:41.546978] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
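After the first target (pid 3607190) is killed, the harness relaunches nvmf_tgt with --interrupt-mode, which produces the spdk_interrupt_mode_enable notice and the per-thread "to intr mode" messages above: the reactors wait on file descriptors instead of busy-polling. The relaunch, as invoked in this log:

    # Shared-memory id 0, all tracepoint groups (-e 0xFFFF), reactors on
    # cores 0-3, event loops driven by interrupts rather than polling.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode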
00:15:41.412 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:41.412 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:41.412 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:42.795 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:42.795 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:42.795 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:42.795 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:42.795 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:42.795 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:42.795 Malloc1 00:15:42.795 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:43.057 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:43.318 19:53:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:43.579 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:43.579 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:43.579 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:43.579 Malloc2 00:15:43.579 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:43.841 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:44.102 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:44.363 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:44.363 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3616294 00:15:44.363 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 3616294 ']' 00:15:44.363 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3616294 00:15:44.363 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:44.363 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:44.363 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3616294 00:15:44.363 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:44.363 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:44.363 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3616294' 00:15:44.363 killing process with pid 3616294 00:15:44.363 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3616294 00:15:44.363 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3616294 00:15:44.363 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:44.363 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:44.363 00:15:44.363 real 0m50.999s 00:15:44.363 user 3m15.425s 00:15:44.363 sys 0m2.752s 00:15:44.363 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:44.363 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:44.363 ************************************ 00:15:44.363 END TEST nvmf_vfio_user 00:15:44.363 ************************************ 00:15:44.363 19:53:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:44.363 19:53:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:44.363 19:53:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:44.363 19:53:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:44.625 ************************************ 00:15:44.625 START TEST nvmf_vfio_user_nvme_compliance 00:15:44.625 ************************************ 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:44.626 * Looking for test storage... 
00:15:44.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:44.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.626 --rc genhtml_branch_coverage=1 00:15:44.626 --rc genhtml_function_coverage=1 00:15:44.626 --rc genhtml_legend=1 00:15:44.626 --rc geninfo_all_blocks=1 00:15:44.626 --rc geninfo_unexecuted_blocks=1 00:15:44.626 00:15:44.626 ' 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:44.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.626 --rc genhtml_branch_coverage=1 00:15:44.626 --rc genhtml_function_coverage=1 00:15:44.626 --rc genhtml_legend=1 00:15:44.626 --rc geninfo_all_blocks=1 00:15:44.626 --rc geninfo_unexecuted_blocks=1 00:15:44.626 00:15:44.626 ' 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:44.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.626 --rc genhtml_branch_coverage=1 00:15:44.626 --rc genhtml_function_coverage=1 00:15:44.626 --rc genhtml_legend=1 00:15:44.626 --rc geninfo_all_blocks=1 00:15:44.626 --rc geninfo_unexecuted_blocks=1 00:15:44.626 00:15:44.626 ' 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:44.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.626 --rc genhtml_branch_coverage=1 00:15:44.626 --rc genhtml_function_coverage=1 00:15:44.626 --rc genhtml_legend=1 00:15:44.626 --rc geninfo_all_blocks=1 00:15:44.626 --rc 
geninfo_unexecuted_blocks=1 00:15:44.626 00:15:44.626 ' 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:44.626 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:44.627 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:44.627 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:44.627 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:44.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:44.627 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:44.627 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:44.627 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:44.627 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:44.627 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:44.627 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:44.627 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:44.627 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:44.627 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3617067 00:15:44.627 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3617067' 00:15:44.627 Process pid: 3617067 00:15:44.627 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:44.627 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:44.627 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3617067 00:15:44.627 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 3617067 ']' 00:15:44.627 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.627 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:44.627 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.627 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:44.627 19:53:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:44.888 [2024-11-26 19:53:45.496410] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:15:44.888 [2024-11-26 19:53:45.496485] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.888 [2024-11-26 19:53:45.584804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:44.888 [2024-11-26 19:53:45.619254] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:44.888 [2024-11-26 19:53:45.619286] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:44.888 [2024-11-26 19:53:45.619292] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:44.888 [2024-11-26 19:53:45.619297] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:44.888 [2024-11-26 19:53:45.619301] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:44.888 [2024-11-26 19:53:45.620620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.888 [2024-11-26 19:53:45.620806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.888 [2024-11-26 19:53:45.620809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:45.831 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:45.831 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:15:45.831 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:46.773 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:46.773 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:46.773 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:46.773 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.773 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:46.773 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.773 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:46.773 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:46.773 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.773 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:46.773 malloc0 00:15:46.773 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.773 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:46.773 19:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.773 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:46.773 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.773 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:46.774 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.774 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:46.774 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.774 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:46.774 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.774 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:46.774 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.774 19:53:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:46.774 00:15:46.774 00:15:46.774 CUnit - A unit testing framework for C - Version 2.1-3 00:15:46.774 http://cunit.sourceforge.net/ 00:15:46.774 00:15:46.774 00:15:46.774 Suite: nvme_compliance 00:15:46.774 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-26 19:53:47.544342] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:46.774 [2024-11-26 19:53:47.545633] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:46.774 [2024-11-26 19:53:47.545645] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:46.774 [2024-11-26 19:53:47.545650] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:46.774 [2024-11-26 19:53:47.547354] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:46.774 passed 00:15:47.035 Test: admin_identify_ctrlr_verify_fused ...[2024-11-26 19:53:47.624844] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.035 [2024-11-26 19:53:47.627870] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.035 passed 00:15:47.035 Test: admin_identify_ns ...[2024-11-26 19:53:47.704540] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.035 [2024-11-26 19:53:47.765174] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:47.035 [2024-11-26 19:53:47.773167] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:47.035 [2024-11-26 19:53:47.794250] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:47.035 passed 00:15:47.296 Test: admin_get_features_mandatory_features ...[2024-11-26 19:53:47.868493] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.296 [2024-11-26 19:53:47.871517] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.296 passed 00:15:47.296 Test: admin_get_features_optional_features ...[2024-11-26 19:53:47.950989] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.296 [2024-11-26 19:53:47.954007] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.296 passed 00:15:47.296 Test: admin_set_features_number_of_queues ...[2024-11-26 19:53:48.026705] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.556 [2024-11-26 19:53:48.131253] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.556 passed 00:15:47.556 Test: admin_get_log_page_mandatory_logs ...[2024-11-26 19:53:48.206271] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.556 [2024-11-26 19:53:48.209293] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.556 passed 00:15:47.557 Test: admin_get_log_page_with_lpo ...[2024-11-26 19:53:48.284507] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.557 [2024-11-26 19:53:48.356170] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:47.557 [2024-11-26 19:53:48.369218] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.817 passed 00:15:47.817 Test: fabric_property_get ...[2024-11-26 19:53:48.441431] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.817 [2024-11-26 19:53:48.442626] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:47.817 [2024-11-26 19:53:48.444449] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.817 passed 00:15:47.817 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-26 19:53:48.520889] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.817 [2024-11-26 19:53:48.522089] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:47.817 [2024-11-26 19:53:48.523905] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.817 passed 00:15:47.817 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-26 19:53:48.597677] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.080 [2024-11-26 19:53:48.682168] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:48.080 [2024-11-26 19:53:48.697235] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:48.080 [2024-11-26 19:53:48.702284] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.080 passed 00:15:48.080 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-26 19:53:48.775508] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.080 [2024-11-26 19:53:48.776700] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:48.080 [2024-11-26 19:53:48.778523] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.080 passed 00:15:48.080 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-26 19:53:48.853236] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.341 [2024-11-26 19:53:48.931165] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:48.341 [2024-11-26 19:53:48.955163] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:48.341 [2024-11-26 19:53:48.960231] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.341 passed 00:15:48.341 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-26 19:53:49.034441] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.341 [2024-11-26 19:53:49.035640] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:48.341 [2024-11-26 19:53:49.035662] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:48.341 [2024-11-26 19:53:49.037462] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.341 passed 00:15:48.341 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-26 19:53:49.112519] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.602 [2024-11-26 19:53:49.208168] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:48.602 [2024-11-26 19:53:49.216164] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:48.602 [2024-11-26 19:53:49.224166] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:48.602 [2024-11-26 19:53:49.232164] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:48.602 [2024-11-26 19:53:49.261230] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.602 passed 00:15:48.602 Test: admin_create_io_sq_verify_pc ...[2024-11-26 19:53:49.333428] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.602 [2024-11-26 19:53:49.352172] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:48.602 [2024-11-26 19:53:49.369600] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.602 passed 00:15:48.862 Test: admin_create_io_qp_max_qps ...[2024-11-26 19:53:49.443065] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:49.803 [2024-11-26 19:53:50.540169] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:50.375 [2024-11-26 19:53:50.916320] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.375 passed 00:15:50.375 Test: admin_create_io_sq_shared_cq ...[2024-11-26 19:53:50.992160] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.375 [2024-11-26 19:53:51.126166] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:50.375 [2024-11-26 19:53:51.163214] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.375 passed 00:15:50.375 00:15:50.375 Run Summary: Type Total Ran Passed Failed Inactive 00:15:50.375 suites 1 1 n/a 0 0 00:15:50.375 tests 18 18 18 0 0 00:15:50.375 asserts 
360 360 360 0 n/a 00:15:50.375 00:15:50.375 Elapsed time = 1.486 seconds 00:15:50.636 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3617067 00:15:50.637 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 3617067 ']' 00:15:50.637 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 3617067 00:15:50.637 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:50.637 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:50.637 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3617067 00:15:50.637 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:50.637 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:50.637 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3617067' 00:15:50.637 killing process with pid 3617067 00:15:50.637 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 3617067 00:15:50.637 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 3617067 00:15:50.637 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:50.637 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:50.637 00:15:50.637 real 0m6.188s 00:15:50.637 user 0m17.513s 00:15:50.637 sys 0m0.545s 00:15:50.637 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:50.637 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:50.637 ************************************ 00:15:50.637 END TEST nvmf_vfio_user_nvme_compliance 00:15:50.637 ************************************ 00:15:50.637 19:53:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:50.637 19:53:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:50.637 19:53:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:50.637 19:53:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:50.899 ************************************ 00:15:50.899 START TEST nvmf_vfio_user_fuzz 00:15:50.899 ************************************ 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:50.899 * Looking for test storage... 
00:15:50.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:50.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.899 --rc genhtml_branch_coverage=1 00:15:50.899 --rc genhtml_function_coverage=1 00:15:50.899 --rc genhtml_legend=1 00:15:50.899 --rc geninfo_all_blocks=1 00:15:50.899 --rc geninfo_unexecuted_blocks=1 00:15:50.899 00:15:50.899 ' 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:50.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.899 --rc genhtml_branch_coverage=1 00:15:50.899 --rc genhtml_function_coverage=1 00:15:50.899 --rc genhtml_legend=1 00:15:50.899 --rc geninfo_all_blocks=1 00:15:50.899 --rc geninfo_unexecuted_blocks=1 00:15:50.899 00:15:50.899 ' 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:50.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.899 --rc genhtml_branch_coverage=1 00:15:50.899 --rc genhtml_function_coverage=1 00:15:50.899 --rc genhtml_legend=1 00:15:50.899 --rc geninfo_all_blocks=1 00:15:50.899 --rc geninfo_unexecuted_blocks=1 00:15:50.899 00:15:50.899 ' 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:50.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.899 --rc genhtml_branch_coverage=1 00:15:50.899 --rc genhtml_function_coverage=1 00:15:50.899 --rc genhtml_legend=1 00:15:50.899 --rc geninfo_all_blocks=1 00:15:50.899 --rc geninfo_unexecuted_blocks=1 00:15:50.899 00:15:50.899 ' 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.899 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:50.900 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3618455 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3618455' 00:15:50.900 Process pid: 3618455 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3618455 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3618455 ']' 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
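[editor's note] As in the compliance run above, the fuzz harness installs trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT before waiting on the target, so an interrupted run still reaps the daemon. A sketch of that cleanup idiom with a simplified killprocess (the real helper in autotest_common.sh also verifies the process name before killing, as the later "ps --no-headers -o comm=" lines show):

    # Simplified stand-in for the killprocess helper seen in the trace.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0   # kill -0 only probes liveness
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }

    nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT

    # ... test body runs here ...

    trap - SIGINT SIGTERM EXIT     # clear the guard before normal cleanup
    killprocess "$nvmfpid"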
00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:50.900 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:51.840 19:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:51.840 19:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:51.841 19:53:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:52.782 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:52.783 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.783 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:52.783 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.783 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:52.783 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:52.783 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.783 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:52.783 malloc0 00:15:52.783 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.783 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:52.783 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.783 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:52.783 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.783 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:52.783 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.783 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:53.044 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.044 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:53.044 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.044 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:53.044 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.044 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
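[editor's note] The rpc_cmd calls above are the entire target setup for the fuzzer: a VFIOUSER transport, a 64 MiB malloc bdev, an open subsystem, its namespace, and a listener rooted at /var/run/vfio-user. Written out as plain rpc.py invocations (the RPC path is hypothetical; rpc_cmd in the suite is a thin wrapper around this script):

    RPC="/path/to/spdk/scripts/rpc.py"       # hypothetical location
    NQN=nqn.2021-09.io.spdk:cnode0

    mkdir -p /var/run/vfio-user
    $RPC nvmf_create_transport -t VFIOUSER
    $RPC bdev_malloc_create 64 512 -b malloc0               # 64 MiB bdev, 512 B blocks
    $RPC nvmf_create_subsystem "$NQN" -a -s spdk            # -a: allow any host
    $RPC nvmf_subsystem_add_ns "$NQN" malloc0
    $RPC nvmf_subsystem_add_listener "$NQN" -t VFIOUSER -a /var/run/vfio-user -s 0

The resulting trid string ('trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user') is what nvme_fuzz is pointed at in the next step of the log.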
00:15:53.044 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:25.163 Fuzzing completed. Shutting down the fuzz application 00:16:25.163 00:16:25.163 Dumping successful admin opcodes: 00:16:25.163 9, 10, 00:16:25.163 Dumping successful io opcodes: 00:16:25.163 0, 00:16:25.163 NS: 0x20000081ef00 I/O qp, Total commands completed: 1417330, total successful commands: 5569, random_seed: 2378656448 00:16:25.163 NS: 0x20000081ef00 admin qp, Total commands completed: 346928, total successful commands: 94, random_seed: 4130156480 00:16:25.163 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:25.163 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.163 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:25.163 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.163 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3618455 00:16:25.163 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3618455 ']' 00:16:25.163 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 3618455 00:16:25.163 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:16:25.163 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.163 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3618455 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3618455' 00:16:25.163 killing process with pid 3618455 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 3618455 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 3618455 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:25.163 00:16:25.163 real 0m32.807s 00:16:25.163 user 0m37.955s 00:16:25.163 sys 0m24.194s 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:25.163 ************************************ 
00:16:25.163 END TEST nvmf_vfio_user_fuzz 00:16:25.163 ************************************ 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:25.163 ************************************ 00:16:25.163 START TEST nvmf_auth_target 00:16:25.163 ************************************ 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:25.163 * Looking for test storage... 00:16:25.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:25.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.163 --rc genhtml_branch_coverage=1 00:16:25.163 --rc genhtml_function_coverage=1 00:16:25.163 --rc genhtml_legend=1 00:16:25.163 --rc geninfo_all_blocks=1 00:16:25.163 --rc geninfo_unexecuted_blocks=1 00:16:25.163 00:16:25.163 ' 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:25.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.163 --rc genhtml_branch_coverage=1 00:16:25.163 --rc genhtml_function_coverage=1 00:16:25.163 --rc genhtml_legend=1 00:16:25.163 --rc geninfo_all_blocks=1 00:16:25.163 --rc geninfo_unexecuted_blocks=1 00:16:25.163 00:16:25.163 ' 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:25.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.163 --rc genhtml_branch_coverage=1 00:16:25.163 --rc genhtml_function_coverage=1 00:16:25.163 --rc genhtml_legend=1 00:16:25.163 --rc geninfo_all_blocks=1 00:16:25.163 --rc geninfo_unexecuted_blocks=1 00:16:25.163 00:16:25.163 ' 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:25.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.163 --rc genhtml_branch_coverage=1 00:16:25.163 --rc genhtml_function_coverage=1 00:16:25.163 --rc genhtml_legend=1 00:16:25.163 --rc geninfo_all_blocks=1 00:16:25.163 --rc geninfo_unexecuted_blocks=1 00:16:25.163 00:16:25.163 ' 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:25.163 19:54:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.163 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:25.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
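One genuine failure is captured above: nvmf/common.sh line 33 evaluates `'[' '' -eq 1 ']'` and bash prints "integer expression expected", because the variable under test is empty in this environment and `[ -eq ]` requires both operands to be integers. The run shrugs it off, but the defensive pattern is to default or validate before the numeric test. A minimal sketch, with SOME_FLAG as a placeholder name rather than the actual variable from line 33:

# Default an unset/empty flag before an integer comparison:
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi

# Or validate the shape first and fall through quietly otherwise:
if [[ ${SOME_FLAG:-} =~ ^[0-9]+$ ]] && [ "$SOME_FLAG" -eq 1 ]; then
    echo "flag enabled"
fi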
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:25.164 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.779 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:31.779 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:31.779 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:31.779 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:31.779 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:31.779 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:31.779 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:31.779 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:31.779 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:31.779 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:31.779 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:31.780 
19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:31.780 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:31.780 19:54:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:31.780 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:31.780 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:31.780 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
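Device discovery here works purely from sysfs: the e810/x722/mlx arrays collect PCI functions matching known vendor:device IDs, and each function is mapped to its kernel interface by globbing /sys/bus/pci/devices/$pci/net/* — which is how 0000:4b:00.0 and 0000:4b:00.1 resolve to cvl_0_0 and cvl_0_1 above. A standalone sketch of that lookup, using the E810 ID seen in the log (8086:159b):

# Map every E810-family PCI function to its bound net interface name.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$path" ] || continue              # function has no net device bound
        echo "Found net device under $pci: ${path##*/}"
    done
done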
net_devs+=("${pci_net_devs[@]}") 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:31.780 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:31.781 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:31.781 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:31.781 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:31.781 19:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:31.781 19:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:31.781 19:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:31.781 19:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:31.781 19:54:32 
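nvmf_tcp_init then builds the test topology from the two discovered ports: cvl_0_0 moves into a fresh network namespace (cvl_0_0_ns_spdk) and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule admitting the NVMe/TCP port; the pings that follow verify the link in both directions. A generic replay of those steps — the interface and namespace names are the ones from this log and will differ per machine:

NS=cvl_0_0_ns_spdk TGT_IF=cvl_0_0 INI_IF=cvl_0_1
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                              # initiator -> target sanity check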
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:31.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:31.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:16:31.781 00:16:31.781 --- 10.0.0.2 ping statistics --- 00:16:31.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.781 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:16:31.781 19:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:31.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:31.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:16:31.781 00:16:31.781 --- 10.0.0.1 ping statistics --- 00:16:31.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.781 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:16:31.781 19:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:31.781 19:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:31.781 19:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:31.781 19:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:31.781 19:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:31.781 19:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:31.781 19:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:31.781 19:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:31.781 19:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:31.781 19:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:31.781 19:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:31.781 19:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:31.781 19:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.781 19:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3629021 00:16:31.781 19:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3629021 00:16:31.781 19:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:31.781 19:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3629021 ']' 00:16:31.781 19:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.781 19:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:31.781 19:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
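Both pings succeed, so nvmfappstart launches the target inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth, pid 3629021) and waitforlisten blocks until the RPC socket answers. waitforlisten's real implementation lives in autotest_common.sh; the sketch below only approximates its poll loop:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
nvmfpid=$!
# Poll the RPC socket until the app answers (loop shape is illustrative).
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done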
00:16:31.781 19:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:31.781 19:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.351 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:32.351 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:32.351 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:32.351 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:32.351 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.351 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:32.351 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3629365 00:16:32.351 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:32.351 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:32.351 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:32.351 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:32.351 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:32.351 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=189a03eed824731dcd903aaff0461ce6c8b57ee811de4868 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.57a 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 189a03eed824731dcd903aaff0461ce6c8b57ee811de4868 0 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 189a03eed824731dcd903aaff0461ce6c8b57ee811de4868 0 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=189a03eed824731dcd903aaff0461ce6c8b57ee811de4868 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.57a 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.57a 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.57a 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=55e2eb5c558ea8cf03835afb4ce64dd4f7d33e5636c73b296bd07eadf1281e9a 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.09u 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 55e2eb5c558ea8cf03835afb4ce64dd4f7d33e5636c73b296bd07eadf1281e9a 3 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 55e2eb5c558ea8cf03835afb4ce64dd4f7d33e5636c73b296bd07eadf1281e9a 3 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=55e2eb5c558ea8cf03835afb4ce64dd4f7d33e5636c73b296bd07eadf1281e9a 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:32.352 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.09u 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.09u 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.09u 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=de0acd54f465b5513e0745da68b232f4 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Vvj 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key de0acd54f465b5513e0745da68b232f4 1 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 de0acd54f465b5513e0745da68b232f4 1 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=de0acd54f465b5513e0745da68b232f4 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Vvj 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Vvj 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Vvj 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d9f3c5e96ef42260f3c4ae336435cdc3b648b82117ea5494 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.iS1 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d9f3c5e96ef42260f3c4ae336435cdc3b648b82117ea5494 2 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d9f3c5e96ef42260f3c4ae336435cdc3b648b82117ea5494 2 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:32.613 19:54:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d9f3c5e96ef42260f3c4ae336435cdc3b648b82117ea5494 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.iS1 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.iS1 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.iS1 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fabdaf47962183855311f7809ddea5a7f1ffb432c01ba119 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.SUx 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fabdaf47962183855311f7809ddea5a7f1ffb432c01ba119 2 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fabdaf47962183855311f7809ddea5a7f1ffb432c01ba119 2 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fabdaf47962183855311f7809ddea5a7f1ffb432c01ba119 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.SUx 00:16:32.613 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.SUx 00:16:32.614 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.SUx 00:16:32.614 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:32.614 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:16:32.614 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:32.614 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:32.614 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:32.614 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:32.614 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:32.614 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=693c6ce6f5dc12670835fff09b210817 00:16:32.614 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:32.875 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.FlH 00:16:32.875 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 693c6ce6f5dc12670835fff09b210817 1 00:16:32.875 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 693c6ce6f5dc12670835fff09b210817 1 00:16:32.875 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:32.875 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:32.875 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=693c6ce6f5dc12670835fff09b210817 00:16:32.875 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:32.875 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:32.875 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.FlH 00:16:32.875 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.FlH 00:16:32.875 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.FlH 00:16:32.875 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:32.875 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:32.875 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:32.875 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:32.875 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:32.875 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:32.875 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:32.875 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4ebfb9f287161ddc91111a7972680b304d4806de31abf0a8fe512b60130e2f21 00:16:32.875 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:32.875 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.laA 00:16:32.875 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 4ebfb9f287161ddc91111a7972680b304d4806de31abf0a8fe512b60130e2f21 3 00:16:32.875 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4ebfb9f287161ddc91111a7972680b304d4806de31abf0a8fe512b60130e2f21 3 00:16:32.875 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:32.875 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:32.875 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4ebfb9f287161ddc91111a7972680b304d4806de31abf0a8fe512b60130e2f21 00:16:32.876 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:32.876 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:32.876 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.laA 00:16:32.876 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.laA 00:16:32.876 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.laA 00:16:32.876 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:32.876 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3629021 00:16:32.876 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3629021 ']' 00:16:32.876 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.876 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:32.876 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.876 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:32.876 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.137 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:33.137 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:33.137 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3629365 /var/tmp/host.sock 00:16:33.137 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3629365 ']' 00:16:33.137 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:33.137 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:33.137 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:33.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
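All four key pairs above come out of the same gen_dhchap_key recipe (note ckeys[3] is deliberately left empty, so the last key is used without a paired controller key): draw len/2 random bytes with `xxd -p -c0 -l <n> /dev/urandom`, then hand the hex string to an inline python step (the `python -` records) that wraps it in the DHHC-1 secret format, DHHC-1:<two-hex-digit digest id>:<base64 payload>:. Decoding the secrets printed later in this log shows the payload is the ASCII hex key followed by four CRC-32 bytes; the CRC byte order below is an assumption the log does not confirm. A sketch of that wrapping:

key_hex=$(xxd -p -c0 -l 24 /dev/urandom)        # 48 hex chars, as in gen_dhchap_key null 48
digest_id=0                                     # 0=null, 1=sha256, 2=sha384, 3=sha512
secret=$(python3 -c '
import base64, binascii, struct, sys
key = sys.argv[1].encode()                      # the ASCII hex string is the key material
crc = struct.pack("<I", binascii.crc32(key))    # CRC-32 of the key, little-endian (assumed)
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
' "$key_hex" "$digest_id")
echo "$secret"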
00:16:33.137 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:33.137 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.137 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:33.137 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:33.137 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:33.137 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.137 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.398 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.398 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:33.398 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.57a 00:16:33.398 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.398 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.398 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.398 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.57a 00:16:33.398 19:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.57a 00:16:33.398 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.09u ]] 00:16:33.398 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.09u 00:16:33.398 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.398 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.398 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.398 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.09u 00:16:33.398 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.09u 00:16:33.832 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:33.832 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Vvj 00:16:33.832 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.832 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.832 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.832 19:54:34 
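From here the test drives two SPDK processes side by side: the nvmf target answers on the default /var/tmp/spdk.sock (the rpc_cmd calls), and the host-side spdk_tgt started earlier with `-r /var/tmp/host.sock -L nvme_auth` (pid 3629365) answers via the hostrpc wrapper. Each key file is chmod'ed to 0600 and registered on both sides with keyring_file_add_key, since each process keeps its own keyring. The wrapper pattern, sketched — the real rpc_cmd in autotest_common.sh is fancier than this one-liner:

hostrpc() { ./scripts/rpc.py -s /var/tmp/host.sock "$@"; }
rpc_cmd() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

chmod 0600 /tmp/spdk.key-null.57a
rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.57a   # target-side keyring
hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.57a   # host-side keyring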
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Vvj 00:16:33.832 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Vvj 00:16:33.832 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.iS1 ]] 00:16:33.832 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iS1 00:16:33.832 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.832 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.832 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.832 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iS1 00:16:33.832 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iS1 00:16:34.227 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:34.227 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.SUx 00:16:34.227 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.227 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.227 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.227 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.SUx 00:16:34.227 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.SUx 00:16:34.227 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.FlH ]] 00:16:34.227 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.FlH 00:16:34.228 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.228 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.228 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.228 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.FlH 00:16:34.228 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.FlH 00:16:34.520 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:34.520 19:54:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.laA 00:16:34.520 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.520 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.520 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.520 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.laA 00:16:34.520 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.laA 00:16:34.520 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:34.520 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:34.520 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:34.520 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.520 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:34.520 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:34.781 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:34.781 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.781 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:34.781 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:34.781 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:34.781 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.781 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.781 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.781 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.781 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.781 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.781 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.781 
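Each digest/dhgroup/key combination is then exercised with the same three-step flow: restrict the host side to one algorithm pair (bdev_nvme_set_options), authorize the host NQN on the subsystem with its DH-HMAC-CHAP keys (nvmf_subsystem_add_host), and attach a controller through the host stack so authentication actually runs. Condensed from the trace, key0/ckey0 shown; the loop repeats per key:

hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$NVME_HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$NVME_HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0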
19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.040 00:16:35.040 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.040 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.040 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.301 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.301 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.301 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.301 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.301 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.301 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.301 { 00:16:35.301 "cntlid": 1, 00:16:35.301 "qid": 0, 00:16:35.301 "state": "enabled", 00:16:35.301 "thread": "nvmf_tgt_poll_group_000", 00:16:35.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:35.301 "listen_address": { 00:16:35.301 "trtype": "TCP", 00:16:35.301 "adrfam": "IPv4", 00:16:35.301 "traddr": "10.0.0.2", 00:16:35.301 "trsvcid": "4420" 00:16:35.301 }, 00:16:35.301 "peer_address": { 00:16:35.301 "trtype": "TCP", 00:16:35.301 "adrfam": "IPv4", 00:16:35.301 "traddr": "10.0.0.1", 00:16:35.301 "trsvcid": "57514" 00:16:35.301 }, 00:16:35.301 "auth": { 00:16:35.301 "state": "completed", 00:16:35.301 "digest": "sha256", 00:16:35.301 "dhgroup": "null" 00:16:35.301 } 00:16:35.301 } 00:16:35.301 ]' 00:16:35.301 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.301 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.301 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.301 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:35.301 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.301 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.301 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.301 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.561 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:16:35.561 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:16:36.130 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.130 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:36.130 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.130 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.390 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.390 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.390 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:36.390 19:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:36.390 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:36.390 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.390 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:36.390 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:36.390 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:36.390 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.390 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.390 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.390 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.390 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.390 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.390 19:54:37 
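With the bdev-level attach verified and torn down, the same credentials are exercised through the kernel initiator: nvme_connect passes the target key as --dhchap-secret and the bidirectional controller key as --dhchap-ctrl-secret, then disconnects and removes the host entry before the next combination. The nvme-cli invocation as this log runs it, with the generated secrets elided:

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$NVME_HOSTNQN" --hostid "$NVME_HOSTID" -l 0 \
    --dhchap-secret "DHHC-1:00:..." \
    --dhchap-ctrl-secret "DHHC-1:03:..."
nvme disconnect -n nqn.2024-03.io.spdk:cnode0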
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.390 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.651 00:16:36.651 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.651 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.651 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.910 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.910 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.910 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.910 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.910 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.910 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.910 { 00:16:36.910 "cntlid": 3, 00:16:36.910 "qid": 0, 00:16:36.910 "state": "enabled", 00:16:36.910 "thread": "nvmf_tgt_poll_group_000", 00:16:36.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:36.910 "listen_address": { 00:16:36.910 "trtype": "TCP", 00:16:36.910 "adrfam": "IPv4", 00:16:36.910 "traddr": "10.0.0.2", 00:16:36.910 "trsvcid": "4420" 00:16:36.910 }, 00:16:36.910 "peer_address": { 00:16:36.910 "trtype": "TCP", 00:16:36.910 "adrfam": "IPv4", 00:16:36.910 "traddr": "10.0.0.1", 00:16:36.910 "trsvcid": "57544" 00:16:36.910 }, 00:16:36.910 "auth": { 00:16:36.910 "state": "completed", 00:16:36.910 "digest": "sha256", 00:16:36.910 "dhgroup": "null" 00:16:36.910 } 00:16:36.910 } 00:16:36.910 ]' 00:16:36.910 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.910 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.910 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.910 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:36.910 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.910 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.911 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.911 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.170 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:16:37.170 19:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:16:37.738 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.738 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:37.997 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.997 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.997 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.997 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.997 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:37.997 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:37.997 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:37.997 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.997 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:37.997 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:37.997 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:37.997 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.997 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.997 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.997 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.997 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.997 19:54:38 
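[Editor's note] Every key id goes through the same cycle, visible in full for key1 just above: pin the host to one digest/dhgroup pair, register the host NQN on the subsystem with the key pair under test, authenticate by attaching a controller from the host app, check the result, and tear everything down. A condensed sketch of one pass, using only commands and flags that appear in this trace (hostrpc as in the earlier note, rpc_cmd standing in for the harness's target-side RPC helper; an illustration, not target/auth.sh itself):

    # One pass of the per-key DH-HMAC-CHAP cycle, reconstructed from the trace.
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # Host side: accept exactly one digest and one DH group for this round.
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

    # Target side: allow the host NQN with the key pair under test.
    rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Authenticate by attaching a controller from the host application.
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Verify, then tear down so the next key id starts clean.
    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'    # expect nvme0
    rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN"             # auth block checked below
    hostrpc bdev_nvme_detach_controller nvme0
    rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"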
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.997 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.997 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.256 00:16:38.256 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.256 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.256 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.517 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.517 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.517 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.517 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.517 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.517 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.517 { 00:16:38.517 "cntlid": 5, 00:16:38.517 "qid": 0, 00:16:38.517 "state": "enabled", 00:16:38.517 "thread": "nvmf_tgt_poll_group_000", 00:16:38.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:38.517 "listen_address": { 00:16:38.517 "trtype": "TCP", 00:16:38.517 "adrfam": "IPv4", 00:16:38.517 "traddr": "10.0.0.2", 00:16:38.517 "trsvcid": "4420" 00:16:38.517 }, 00:16:38.517 "peer_address": { 00:16:38.517 "trtype": "TCP", 00:16:38.517 "adrfam": "IPv4", 00:16:38.517 "traddr": "10.0.0.1", 00:16:38.517 "trsvcid": "57560" 00:16:38.517 }, 00:16:38.517 "auth": { 00:16:38.517 "state": "completed", 00:16:38.517 "digest": "sha256", 00:16:38.517 "dhgroup": "null" 00:16:38.517 } 00:16:38.517 } 00:16:38.517 ]' 00:16:38.517 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.517 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.517 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.517 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:38.517 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.517 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.517 19:54:39 
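[Editor's note] The recurring xtrace_disable / set +x / [[ 0 == 0 ]] lines around each rpc_cmd come from the shared autotest harness, which mutes shell tracing while the RPC runs and then asserts on its exit status. A behaviorally equivalent simplification (the real helper in common/autotest_common.sh is more elaborate, so treat this as a reading aid for the log, not its source):

    # What the @563/@10/@591 markers correspond to, reduced to observable
    # behavior: run the RPC quietly, then assert it succeeded. $rootdir is
    # an assumed variable standing for the spdk checkout seen in the paths.
    rpc_cmd() {
        local rc
        set +x                    # the "set +x" entries in the trace
        "$rootdir/scripts/rpc.py" "$@"
        rc=$?
        set -x
        [[ $rc == 0 ]]            # shows up as "[[ 0 == 0 ]]" in the log
    }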
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.517 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.778 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:16:38.778 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:16:39.348 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.609 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:39.609 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.609 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.609 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.609 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.609 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:39.609 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:39.609 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:39.609 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.609 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:39.609 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:39.609 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:39.609 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.609 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:39.609 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.609 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:39.609 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.609 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:39.609 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.609 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.869 00:16:39.869 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.869 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.869 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.129 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.129 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.129 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.129 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.129 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.129 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.129 { 00:16:40.129 "cntlid": 7, 00:16:40.129 "qid": 0, 00:16:40.129 "state": "enabled", 00:16:40.129 "thread": "nvmf_tgt_poll_group_000", 00:16:40.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:40.129 "listen_address": { 00:16:40.129 "trtype": "TCP", 00:16:40.129 "adrfam": "IPv4", 00:16:40.129 "traddr": "10.0.0.2", 00:16:40.129 "trsvcid": "4420" 00:16:40.129 }, 00:16:40.129 "peer_address": { 00:16:40.129 "trtype": "TCP", 00:16:40.129 "adrfam": "IPv4", 00:16:40.129 "traddr": "10.0.0.1", 00:16:40.129 "trsvcid": "57580" 00:16:40.129 }, 00:16:40.129 "auth": { 00:16:40.129 "state": "completed", 00:16:40.129 "digest": "sha256", 00:16:40.129 "dhgroup": "null" 00:16:40.129 } 00:16:40.129 } 00:16:40.129 ]' 00:16:40.129 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.129 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.129 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.130 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:40.130 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.130 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.130 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.130 19:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.388 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:16:40.389 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:16:40.958 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.958 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:40.958 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.958 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.219 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.219 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.219 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.219 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:41.219 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:41.219 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:41.219 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.219 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:41.219 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:41.219 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:41.219 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.219 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.219 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.219 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.219 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.219 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.219 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.219 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.480 00:16:41.480 19:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.480 19:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.480 19:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.740 19:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.740 19:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.740 19:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.740 19:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.740 19:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.740 19:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.740 { 00:16:41.740 "cntlid": 9, 00:16:41.740 "qid": 0, 00:16:41.740 "state": "enabled", 00:16:41.740 "thread": "nvmf_tgt_poll_group_000", 00:16:41.740 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:41.740 "listen_address": { 00:16:41.740 "trtype": "TCP", 00:16:41.740 "adrfam": "IPv4", 00:16:41.740 "traddr": "10.0.0.2", 00:16:41.740 "trsvcid": "4420" 00:16:41.740 }, 00:16:41.740 "peer_address": { 00:16:41.740 "trtype": "TCP", 00:16:41.740 "adrfam": "IPv4", 00:16:41.740 "traddr": "10.0.0.1", 00:16:41.740 "trsvcid": "57604" 00:16:41.740 }, 00:16:41.740 "auth": { 00:16:41.740 "state": "completed", 00:16:41.740 "digest": "sha256", 00:16:41.740 "dhgroup": "ffdhe2048" 00:16:41.740 } 00:16:41.740 } 00:16:41.740 ]' 00:16:41.740 19:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.740 19:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.740 19:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.740 19:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:41.740 19:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.740 19:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.740 19:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.740 19:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.002 19:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:16:42.002 19:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:16:42.572 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.832 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:42.832 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.832 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.832 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.832 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.832 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:42.832 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:42.832 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:42.832 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.832 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:42.832 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:42.832 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:42.832 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.832 19:54:43 
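[Editor's note] The ckey assignment traced at target/auth.sh@68 explains why the key3 passes above add the host with --dhchap-key key3 and no --dhchap-ctrlr-key: bash's ${var:+word} expands to word only when var is set and non-empty, so a key id without a controller key in ckeys[] silently drops the option. A standalone illustration of the idiom (array contents hypothetical):

    # ${var:+word} yields "word" only if var is set and non-empty; this makes
    # the controller key optional per key id.
    ckeys=([1]="some-ctrl-key" [3]="")   # hypothetical values for illustration
    for id in 1 3; do
        ckey=(${ckeys[$id]:+--dhchap-ctrlr-key "ckey$id"})
        echo "key$id: ${ckey[*]:-<no controller key option>}"
    done
    # key1: --dhchap-ctrlr-key ckey1
    # key3: <no controller key option>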
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.832 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.832 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.832 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.832 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.832 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.832 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.092 00:16:43.092 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.092 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.092 19:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.352 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.352 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.352 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.352 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.352 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.352 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.352 { 00:16:43.352 "cntlid": 11, 00:16:43.352 "qid": 0, 00:16:43.352 "state": "enabled", 00:16:43.352 "thread": "nvmf_tgt_poll_group_000", 00:16:43.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:43.352 "listen_address": { 00:16:43.352 "trtype": "TCP", 00:16:43.352 "adrfam": "IPv4", 00:16:43.352 "traddr": "10.0.0.2", 00:16:43.352 "trsvcid": "4420" 00:16:43.352 }, 00:16:43.352 "peer_address": { 00:16:43.352 "trtype": "TCP", 00:16:43.352 "adrfam": "IPv4", 00:16:43.352 "traddr": "10.0.0.1", 00:16:43.352 "trsvcid": "57634" 00:16:43.352 }, 00:16:43.352 "auth": { 00:16:43.352 "state": "completed", 00:16:43.352 "digest": "sha256", 00:16:43.352 "dhgroup": "ffdhe2048" 00:16:43.352 } 00:16:43.352 } 00:16:43.352 ]' 00:16:43.352 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.352 19:54:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.352 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.352 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:43.352 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.352 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.352 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.352 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.612 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:16:43.612 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:16:44.185 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.185 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:44.185 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.185 19:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.447 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.447 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.447 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:44.447 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:44.447 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:44.447 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.447 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:44.447 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:44.447 19:54:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:44.447 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.447 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.447 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.447 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.447 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.447 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.447 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.447 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.707 00:16:44.707 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.707 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.707 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.968 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.968 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.968 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.968 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.968 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.968 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.968 { 00:16:44.968 "cntlid": 13, 00:16:44.968 "qid": 0, 00:16:44.968 "state": "enabled", 00:16:44.968 "thread": "nvmf_tgt_poll_group_000", 00:16:44.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:44.968 "listen_address": { 00:16:44.968 "trtype": "TCP", 00:16:44.968 "adrfam": "IPv4", 00:16:44.968 "traddr": "10.0.0.2", 00:16:44.968 "trsvcid": "4420" 00:16:44.968 }, 00:16:44.968 "peer_address": { 00:16:44.968 "trtype": "TCP", 00:16:44.968 "adrfam": "IPv4", 00:16:44.968 "traddr": "10.0.0.1", 00:16:44.968 "trsvcid": "57482" 00:16:44.968 }, 00:16:44.968 "auth": { 00:16:44.968 "state": "completed", 00:16:44.968 "digest": 
"sha256", 00:16:44.968 "dhgroup": "ffdhe2048" 00:16:44.968 } 00:16:44.968 } 00:16:44.968 ]' 00:16:44.968 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.968 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.968 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.968 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:44.968 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.968 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.968 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.968 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.230 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:16:45.230 19:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:16:45.801 19:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.801 19:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:45.801 19:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.801 19:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.801 19:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.801 19:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.801 19:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:45.801 19:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:46.062 19:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:46.062 19:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.062 19:54:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:46.062 19:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:46.062 19:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:46.062 19:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.062 19:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:46.062 19:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.062 19:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.062 19:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.062 19:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:46.062 19:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.062 19:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.322 00:16:46.322 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.322 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.322 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.582 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.582 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.582 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.582 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.582 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.582 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.582 { 00:16:46.582 "cntlid": 15, 00:16:46.582 "qid": 0, 00:16:46.582 "state": "enabled", 00:16:46.582 "thread": "nvmf_tgt_poll_group_000", 00:16:46.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:46.582 "listen_address": { 00:16:46.582 "trtype": "TCP", 00:16:46.582 "adrfam": "IPv4", 00:16:46.582 "traddr": "10.0.0.2", 00:16:46.582 "trsvcid": "4420" 00:16:46.582 }, 00:16:46.582 "peer_address": { 00:16:46.582 "trtype": "TCP", 00:16:46.582 "adrfam": "IPv4", 00:16:46.582 "traddr": "10.0.0.1", 00:16:46.582 
"trsvcid": "57502" 00:16:46.582 }, 00:16:46.582 "auth": { 00:16:46.582 "state": "completed", 00:16:46.582 "digest": "sha256", 00:16:46.582 "dhgroup": "ffdhe2048" 00:16:46.582 } 00:16:46.582 } 00:16:46.582 ]' 00:16:46.582 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.582 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.582 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.582 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:46.582 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.582 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.582 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.582 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.843 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:16:46.843 19:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:16:47.414 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.414 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:47.414 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.414 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.414 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.414 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.414 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.414 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:47.414 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:47.676 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:47.676 19:54:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.676 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:47.676 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:47.676 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:47.676 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.676 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.676 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.676 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.676 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.676 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.676 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.676 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.937 00:16:47.937 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.937 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.937 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.198 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.198 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.198 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.198 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.198 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.198 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.198 { 00:16:48.198 "cntlid": 17, 00:16:48.198 "qid": 0, 00:16:48.198 "state": "enabled", 00:16:48.198 "thread": "nvmf_tgt_poll_group_000", 00:16:48.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:48.198 "listen_address": { 00:16:48.198 "trtype": "TCP", 00:16:48.198 "adrfam": "IPv4", 
00:16:48.198 "traddr": "10.0.0.2", 00:16:48.198 "trsvcid": "4420" 00:16:48.198 }, 00:16:48.198 "peer_address": { 00:16:48.198 "trtype": "TCP", 00:16:48.198 "adrfam": "IPv4", 00:16:48.198 "traddr": "10.0.0.1", 00:16:48.198 "trsvcid": "57532" 00:16:48.198 }, 00:16:48.198 "auth": { 00:16:48.198 "state": "completed", 00:16:48.198 "digest": "sha256", 00:16:48.198 "dhgroup": "ffdhe3072" 00:16:48.198 } 00:16:48.198 } 00:16:48.198 ]' 00:16:48.198 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.198 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.198 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.198 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:48.198 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.198 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.198 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.198 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.460 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:16:48.460 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:16:49.031 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.031 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:49.031 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.031 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.031 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.031 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.031 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:49.031 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:49.318 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:49.318 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.318 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:49.318 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:49.318 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:49.318 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.318 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.318 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.318 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.318 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.318 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.318 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.318 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.578 00:16:49.578 19:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.578 19:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.578 19:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.838 19:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.838 19:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.838 19:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.838 19:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.838 19:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.838 19:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.838 { 
00:16:49.838 "cntlid": 19, 00:16:49.838 "qid": 0, 00:16:49.838 "state": "enabled", 00:16:49.838 "thread": "nvmf_tgt_poll_group_000", 00:16:49.838 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:49.838 "listen_address": { 00:16:49.838 "trtype": "TCP", 00:16:49.838 "adrfam": "IPv4", 00:16:49.838 "traddr": "10.0.0.2", 00:16:49.838 "trsvcid": "4420" 00:16:49.838 }, 00:16:49.838 "peer_address": { 00:16:49.838 "trtype": "TCP", 00:16:49.838 "adrfam": "IPv4", 00:16:49.838 "traddr": "10.0.0.1", 00:16:49.838 "trsvcid": "57548" 00:16:49.838 }, 00:16:49.838 "auth": { 00:16:49.838 "state": "completed", 00:16:49.838 "digest": "sha256", 00:16:49.838 "dhgroup": "ffdhe3072" 00:16:49.838 } 00:16:49.838 } 00:16:49.838 ]' 00:16:49.838 19:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.838 19:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.838 19:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.838 19:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:49.838 19:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.838 19:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.838 19:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.838 19:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.097 19:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:16:50.097 19:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:16:50.681 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.681 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:50.681 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.681 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.681 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.681 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.681 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:50.681 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:50.941 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:50.941 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.941 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:50.941 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:50.941 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:50.941 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.941 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.941 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.941 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.941 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.941 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.941 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.941 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.201 00:16:51.201 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.201 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.201 19:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.461 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.461 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.461 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.461 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.461 19:54:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.461 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.461 { 00:16:51.461 "cntlid": 21, 00:16:51.461 "qid": 0, 00:16:51.461 "state": "enabled", 00:16:51.461 "thread": "nvmf_tgt_poll_group_000", 00:16:51.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:51.461 "listen_address": { 00:16:51.461 "trtype": "TCP", 00:16:51.461 "adrfam": "IPv4", 00:16:51.461 "traddr": "10.0.0.2", 00:16:51.461 "trsvcid": "4420" 00:16:51.461 }, 00:16:51.461 "peer_address": { 00:16:51.461 "trtype": "TCP", 00:16:51.461 "adrfam": "IPv4", 00:16:51.461 "traddr": "10.0.0.1", 00:16:51.461 "trsvcid": "57578" 00:16:51.461 }, 00:16:51.461 "auth": { 00:16:51.461 "state": "completed", 00:16:51.461 "digest": "sha256", 00:16:51.461 "dhgroup": "ffdhe3072" 00:16:51.461 } 00:16:51.461 } 00:16:51.461 ]' 00:16:51.461 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.461 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.461 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.462 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:51.462 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.462 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.462 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.462 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.722 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:16:51.722 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:16:52.291 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.291 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:52.291 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.291 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.291 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:52.291 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.291 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:52.291 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:52.552 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:52.552 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.552 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:52.552 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:52.552 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:52.552 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.552 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:52.552 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.552 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.552 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.552 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:52.552 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.552 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.812 00:16:52.812 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.812 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.812 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.072 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.072 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.072 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.072 19:54:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.072 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.072 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.072 { 00:16:53.072 "cntlid": 23, 00:16:53.072 "qid": 0, 00:16:53.072 "state": "enabled", 00:16:53.072 "thread": "nvmf_tgt_poll_group_000", 00:16:53.072 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:53.072 "listen_address": { 00:16:53.072 "trtype": "TCP", 00:16:53.072 "adrfam": "IPv4", 00:16:53.072 "traddr": "10.0.0.2", 00:16:53.072 "trsvcid": "4420" 00:16:53.072 }, 00:16:53.072 "peer_address": { 00:16:53.072 "trtype": "TCP", 00:16:53.072 "adrfam": "IPv4", 00:16:53.072 "traddr": "10.0.0.1", 00:16:53.072 "trsvcid": "57602" 00:16:53.072 }, 00:16:53.072 "auth": { 00:16:53.072 "state": "completed", 00:16:53.072 "digest": "sha256", 00:16:53.072 "dhgroup": "ffdhe3072" 00:16:53.072 } 00:16:53.072 } 00:16:53.072 ]' 00:16:53.072 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.072 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.072 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.072 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:53.072 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.072 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.072 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.072 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.332 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:16:53.332 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:16:53.902 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.902 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:53.902 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.902 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.902 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:53.902 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.902 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.902 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:53.902 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:54.162 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:54.162 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.162 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:54.162 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:54.162 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:54.162 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.162 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.162 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.162 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.162 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.162 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.162 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.162 19:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.422 00:16:54.422 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.422 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.422 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.682 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.682 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.682 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.682 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.682 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.682 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.682 { 00:16:54.682 "cntlid": 25, 00:16:54.682 "qid": 0, 00:16:54.682 "state": "enabled", 00:16:54.682 "thread": "nvmf_tgt_poll_group_000", 00:16:54.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:54.682 "listen_address": { 00:16:54.682 "trtype": "TCP", 00:16:54.682 "adrfam": "IPv4", 00:16:54.682 "traddr": "10.0.0.2", 00:16:54.682 "trsvcid": "4420" 00:16:54.682 }, 00:16:54.682 "peer_address": { 00:16:54.682 "trtype": "TCP", 00:16:54.682 "adrfam": "IPv4", 00:16:54.682 "traddr": "10.0.0.1", 00:16:54.682 "trsvcid": "36466" 00:16:54.682 }, 00:16:54.682 "auth": { 00:16:54.682 "state": "completed", 00:16:54.682 "digest": "sha256", 00:16:54.682 "dhgroup": "ffdhe4096" 00:16:54.682 } 00:16:54.682 } 00:16:54.682 ]' 00:16:54.682 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.682 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.682 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.682 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:54.682 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.682 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.682 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.682 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.943 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:16:54.943 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:16:55.514 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.775 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:55.775 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.776 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.776 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.776 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.776 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:55.776 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:55.776 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:55.776 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.776 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:55.776 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:55.776 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:55.776 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.776 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.776 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.776 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.776 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.776 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.776 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.776 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.037 00:16:56.037 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.037 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.037 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.305 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.305 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.305 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.305 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.305 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.305 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.305 { 00:16:56.305 "cntlid": 27, 00:16:56.305 "qid": 0, 00:16:56.305 "state": "enabled", 00:16:56.305 "thread": "nvmf_tgt_poll_group_000", 00:16:56.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:56.305 "listen_address": { 00:16:56.305 "trtype": "TCP", 00:16:56.305 "adrfam": "IPv4", 00:16:56.305 "traddr": "10.0.0.2", 00:16:56.305 "trsvcid": "4420" 00:16:56.305 }, 00:16:56.306 "peer_address": { 00:16:56.306 "trtype": "TCP", 00:16:56.306 "adrfam": "IPv4", 00:16:56.306 "traddr": "10.0.0.1", 00:16:56.306 "trsvcid": "36484" 00:16:56.306 }, 00:16:56.306 "auth": { 00:16:56.306 "state": "completed", 00:16:56.306 "digest": "sha256", 00:16:56.306 "dhgroup": "ffdhe4096" 00:16:56.306 } 00:16:56.306 } 00:16:56.306 ]' 00:16:56.306 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.306 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.306 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.306 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:56.306 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.566 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.566 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.566 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.566 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:16:56.566 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:16:57.136 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:57.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.397 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:57.397 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.397 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.397 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.397 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.397 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:57.397 19:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:57.398 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:57.398 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.398 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:57.398 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:57.398 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:57.398 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.398 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.398 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.398 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.398 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.398 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.398 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.398 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.659 00:16:57.659 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:16:57.659 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.659 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.920 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.920 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.920 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.920 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.920 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.920 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.920 { 00:16:57.920 "cntlid": 29, 00:16:57.920 "qid": 0, 00:16:57.920 "state": "enabled", 00:16:57.920 "thread": "nvmf_tgt_poll_group_000", 00:16:57.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:57.920 "listen_address": { 00:16:57.920 "trtype": "TCP", 00:16:57.920 "adrfam": "IPv4", 00:16:57.920 "traddr": "10.0.0.2", 00:16:57.920 "trsvcid": "4420" 00:16:57.920 }, 00:16:57.920 "peer_address": { 00:16:57.920 "trtype": "TCP", 00:16:57.920 "adrfam": "IPv4", 00:16:57.920 "traddr": "10.0.0.1", 00:16:57.920 "trsvcid": "36510" 00:16:57.920 }, 00:16:57.920 "auth": { 00:16:57.920 "state": "completed", 00:16:57.920 "digest": "sha256", 00:16:57.920 "dhgroup": "ffdhe4096" 00:16:57.920 } 00:16:57.920 } 00:16:57.920 ]' 00:16:57.920 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.920 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.920 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.920 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:57.920 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.181 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.181 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.181 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.181 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:16:58.181 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: 
--dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:16:59.124 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.124 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:59.124 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.124 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.124 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.124 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.124 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:59.124 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:59.124 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:59.124 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.124 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:59.124 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:59.124 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:59.124 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.124 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:59.124 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.124 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.124 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.124 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:59.124 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.124 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.385 00:16:59.385 19:55:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.385 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.385 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.647 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.647 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.647 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.647 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.647 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.647 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.647 { 00:16:59.647 "cntlid": 31, 00:16:59.647 "qid": 0, 00:16:59.647 "state": "enabled", 00:16:59.647 "thread": "nvmf_tgt_poll_group_000", 00:16:59.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:59.647 "listen_address": { 00:16:59.647 "trtype": "TCP", 00:16:59.647 "adrfam": "IPv4", 00:16:59.647 "traddr": "10.0.0.2", 00:16:59.647 "trsvcid": "4420" 00:16:59.647 }, 00:16:59.647 "peer_address": { 00:16:59.647 "trtype": "TCP", 00:16:59.647 "adrfam": "IPv4", 00:16:59.647 "traddr": "10.0.0.1", 00:16:59.647 "trsvcid": "36534" 00:16:59.647 }, 00:16:59.647 "auth": { 00:16:59.647 "state": "completed", 00:16:59.647 "digest": "sha256", 00:16:59.647 "dhgroup": "ffdhe4096" 00:16:59.647 } 00:16:59.647 } 00:16:59.647 ]' 00:16:59.647 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.647 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.647 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.647 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:59.647 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.647 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.647 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.647 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.909 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:16:59.909 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:17:00.480 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.480 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:00.480 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.480 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.480 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.480 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.480 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.480 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:00.480 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:00.741 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:00.741 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.741 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:00.741 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:00.741 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:00.741 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.741 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.741 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.741 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.741 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.741 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.741 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.741 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.002 00:17:01.002 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.003 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.003 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.264 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.264 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.264 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.264 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.264 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.264 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.264 { 00:17:01.264 "cntlid": 33, 00:17:01.264 "qid": 0, 00:17:01.264 "state": "enabled", 00:17:01.264 "thread": "nvmf_tgt_poll_group_000", 00:17:01.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:01.264 "listen_address": { 00:17:01.264 "trtype": "TCP", 00:17:01.264 "adrfam": "IPv4", 00:17:01.264 "traddr": "10.0.0.2", 00:17:01.264 "trsvcid": "4420" 00:17:01.264 }, 00:17:01.264 "peer_address": { 00:17:01.264 "trtype": "TCP", 00:17:01.264 "adrfam": "IPv4", 00:17:01.264 "traddr": "10.0.0.1", 00:17:01.264 "trsvcid": "36562" 00:17:01.264 }, 00:17:01.264 "auth": { 00:17:01.264 "state": "completed", 00:17:01.264 "digest": "sha256", 00:17:01.264 "dhgroup": "ffdhe6144" 00:17:01.264 } 00:17:01.264 } 00:17:01.264 ]' 00:17:01.264 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.264 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:01.264 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.525 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:01.525 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.525 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.525 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.525 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.525 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret 
DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:17:01.525 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:17:02.466 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.466 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:02.466 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.466 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.466 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.466 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.466 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:02.466 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:02.466 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:02.466 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.466 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:02.466 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:02.467 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:02.467 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.467 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.467 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.467 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.467 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.467 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.467 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.467 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.726 00:17:02.727 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.727 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.727 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.987 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.987 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.987 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.987 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.987 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.987 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.987 { 00:17:02.987 "cntlid": 35, 00:17:02.987 "qid": 0, 00:17:02.987 "state": "enabled", 00:17:02.987 "thread": "nvmf_tgt_poll_group_000", 00:17:02.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:02.987 "listen_address": { 00:17:02.987 "trtype": "TCP", 00:17:02.987 "adrfam": "IPv4", 00:17:02.987 "traddr": "10.0.0.2", 00:17:02.987 "trsvcid": "4420" 00:17:02.987 }, 00:17:02.987 "peer_address": { 00:17:02.987 "trtype": "TCP", 00:17:02.987 "adrfam": "IPv4", 00:17:02.987 "traddr": "10.0.0.1", 00:17:02.987 "trsvcid": "36596" 00:17:02.987 }, 00:17:02.987 "auth": { 00:17:02.987 "state": "completed", 00:17:02.987 "digest": "sha256", 00:17:02.987 "dhgroup": "ffdhe6144" 00:17:02.987 } 00:17:02.987 } 00:17:02.987 ]' 00:17:02.987 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.987 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.987 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.987 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:02.987 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.247 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.247 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.247 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
00:17:03.247 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:03.247 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==:
00:17:03.247 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==:
00:17:04.191 19:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:04.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:04.191 19:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:04.191 19:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:04.191 19:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:04.191 19:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:04.191 19:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:04.191 19:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:17:04.191 19:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:17:04.191 19:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2
00:17:04.191 19:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:04.191 19:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:04.191 19:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:17:04.191 19:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:04.191 19:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:04.191 19:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:04.191 19:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:04.191 19:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:04.191 19:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
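For bidirectional authentication the same key pair has to be registered on both ends, which is what the trace above does: nvmf_subsystem_add_host binds the host key (key2) and the controller key (ckey2) on the target, and the matching bdev_nvme_attach_controller on the host side names the same keys. Condensed from the xtrace (the key names refer to keyring entries registered earlier in the test, outside this excerpt):

    # Target side: allow the host NQN and bind its DH-HMAC-CHAP keys
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Host side: attach with the same key pair so both directions get verified
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2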
00:17:04.191 19:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:04.191 19:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:04.191 19:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:04.451
00:17:04.451 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:04.451 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:04.451 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:04.713 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:04.713 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:04.713 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:04.713 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:04.713 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:04.713 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:04.713 {
00:17:04.713 "cntlid": 37,
00:17:04.713 "qid": 0,
00:17:04.713 "state": "enabled",
00:17:04.713 "thread": "nvmf_tgt_poll_group_000",
00:17:04.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:04.713 "listen_address": {
00:17:04.713 "trtype": "TCP",
00:17:04.713 "adrfam": "IPv4",
00:17:04.713 "traddr": "10.0.0.2",
00:17:04.713 "trsvcid": "4420"
00:17:04.713 },
00:17:04.713 "peer_address": {
00:17:04.713 "trtype": "TCP",
00:17:04.713 "adrfam": "IPv4",
00:17:04.713 "traddr": "10.0.0.1",
00:17:04.713 "trsvcid": "53302"
00:17:04.713 },
00:17:04.713 "auth": {
00:17:04.713 "state": "completed",
00:17:04.713 "digest": "sha256",
00:17:04.713 "dhgroup": "ffdhe6144"
00:17:04.713 }
00:17:04.713 }
00:17:04.713 ]'
00:17:04.713 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:04.713 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:04.713 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:04.713 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:04.713 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:04.975 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:04.975 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:04.975 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:04.975 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+:
00:17:04.975 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+:
00:17:05.913 19:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:05.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:05.913 19:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:05.913 19:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.913 19:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.913 19:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.913 19:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:05.913 19:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:17:05.913 19:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:17:05.913 19:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3
00:17:05.913 19:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:05.913 19:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:05.913 19:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:17:05.913 19:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:05.913 19:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:05.913 19:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:17:05.913 19:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.913 19:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
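Note that key3 is added without --dhchap-ctrlr-key: ckeys[3] is empty, so the ${ckeys[$3]:+...} expansion on auth.sh@68 produces no arguments and this iteration exercises unidirectional (host-only) authentication. A stripped-down illustration of that bash idiom (variable names paraphrased, not the literal script):

    # ${var:+word} expands to word only if var is set and non-empty
    ckeys=(ck0 ck1 ck2 "")   # empty fourth entry, as for key3 above
    keyid=3
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${#ckey[@]} extra args"   # prints "0 extra args" for keyid=3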
00:17:05.913 19:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.913 19:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:05.913 19:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:05.913 19:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:06.173
00:17:06.173 19:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:06.173 19:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:06.173 19:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:06.434 19:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:06.434 19:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:06.434 19:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.434 19:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:06.434 19:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.434 19:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:06.434 {
00:17:06.434 "cntlid": 39,
00:17:06.434 "qid": 0,
00:17:06.434 "state": "enabled",
00:17:06.434 "thread": "nvmf_tgt_poll_group_000",
00:17:06.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:06.434 "listen_address": {
00:17:06.434 "trtype": "TCP",
00:17:06.434 "adrfam": "IPv4",
00:17:06.434 "traddr": "10.0.0.2",
00:17:06.434 "trsvcid": "4420"
00:17:06.434 },
00:17:06.434 "peer_address": {
00:17:06.434 "trtype": "TCP",
00:17:06.434 "adrfam": "IPv4",
00:17:06.434 "traddr": "10.0.0.1",
00:17:06.434 "trsvcid": "53324"
00:17:06.434 },
00:17:06.434 "auth": {
00:17:06.434 "state": "completed",
00:17:06.434 "digest": "sha256",
00:17:06.434 "dhgroup": "ffdhe6144"
00:17:06.434 }
00:17:06.434 }
00:17:06.434 ]'
00:17:06.434 19:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:06.434 19:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:06.434 19:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:06.434 19:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:06.434 19:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:06.434 19:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:06.434 19:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:06.434 19:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:06.694 19:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=:
00:17:06.694 19:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=:
00:17:07.266 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:07.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:07.526 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:07.526 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:07.526 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:07.526 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:07.526 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:07.526 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:07.526 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:17:07.526 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:17:07.526 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0
00:17:07.526 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:07.526 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:07.526 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:07.526 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:07.526 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:07.526 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:07.526 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
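The dhgroup loop has just advanced here (auth.sh@119), moving the sweep from ffdhe6144 to ffdhe8192. With bdev_nvme_set_options the host pins its negotiation policy to exactly one digest and one DH group per iteration, so a qpair that authenticates at all can only have negotiated that combination; the qpair check later in the cycle confirms it. The host-side call, as issued above (the socket path is the test's host-instance RPC socket):

    # Restrict the host to a single digest/DH-group pair for this pass
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192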
00:17:07.526 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:07.526 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:07.526 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:07.526 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:07.526 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:08.098
00:17:08.098 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:08.098 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:08.098 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:08.360 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:08.360 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:08.360 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:08.360 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:08.360 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:08.360 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:08.360 {
00:17:08.360 "cntlid": 41,
00:17:08.360 "qid": 0,
00:17:08.360 "state": "enabled",
00:17:08.360 "thread": "nvmf_tgt_poll_group_000",
00:17:08.360 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:08.360 "listen_address": {
00:17:08.360 "trtype": "TCP",
00:17:08.360 "adrfam": "IPv4",
00:17:08.360 "traddr": "10.0.0.2",
00:17:08.360 "trsvcid": "4420"
00:17:08.360 },
00:17:08.360 "peer_address": {
00:17:08.360 "trtype": "TCP",
00:17:08.360 "adrfam": "IPv4",
00:17:08.360 "traddr": "10.0.0.1",
00:17:08.360 "trsvcid": "53358"
00:17:08.360 },
00:17:08.360 "auth": {
00:17:08.360 "state": "completed",
00:17:08.360 "digest": "sha256",
00:17:08.360 "dhgroup": "ffdhe8192"
00:17:08.360 }
00:17:08.360 }
00:17:08.360 ]'
00:17:08.360 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:08.360 19:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:08.360 19:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:08.360 19:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:08.360 19:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:08.360 19:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:08.360 19:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:08.360 19:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:08.621 19:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=:
00:17:08.621 19:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=:
00:17:09.193 19:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:09.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:09.193 19:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:09.193 19:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:09.193 19:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:09.193 19:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:09.193 19:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:09.193 19:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:17:09.193 19:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:17:09.454 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1
00:17:09.454 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:09.454 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:09.454 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:09.454 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:09.454 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:09.454 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:09.454 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:09.454 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:09.454 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:09.454 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:09.454 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:09.454 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:10.025
00:17:10.025 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:10.025 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:10.025 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:10.025 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:10.025 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:10.025 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.025 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:10.025 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.025 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:10.025 {
00:17:10.025 "cntlid": 43,
00:17:10.025 "qid": 0,
00:17:10.025 "state": "enabled",
00:17:10.025 "thread": "nvmf_tgt_poll_group_000",
00:17:10.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:10.025 "listen_address": {
00:17:10.025 "trtype": "TCP",
00:17:10.025 "adrfam": "IPv4",
00:17:10.025 "traddr": "10.0.0.2",
00:17:10.025 "trsvcid": "4420"
00:17:10.025 },
00:17:10.025 "peer_address": {
00:17:10.025 "trtype": "TCP",
00:17:10.025 "adrfam": "IPv4",
00:17:10.025 "traddr": "10.0.0.1",
00:17:10.025 "trsvcid": "53376"
00:17:10.025 },
00:17:10.025 "auth": {
00:17:10.025 "state": "completed",
00:17:10.025 "digest": "sha256",
00:17:10.025 "dhgroup": "ffdhe8192"
00:17:10.025 }
00:17:10.025 }
00:17:10.025 ]'
00:17:10.025 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:10.286 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:10.286 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:10.286 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:10.286 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:10.286 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:10.286 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:10.286 19:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:10.546 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==:
00:17:10.546 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==:
00:17:11.117 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:11.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:11.117 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:11.117 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:11.117 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:11.117 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:11.117 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:11.117 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:17:11.117 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:17:11.377 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2
00:17:11.377 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:11.377 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:11.377 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:11.377 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:11.377 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:11.377 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:11.377 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:11.377 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:11.378 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:11.378 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:11.378 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:11.378 19:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:11.638
00:17:11.938 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:11.938 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:11.938 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:11.938 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:11.938 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:11.938 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:11.938 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:11.938 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:11.938 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:11.938 {
00:17:11.938 "cntlid": 45,
00:17:11.938 "qid": 0,
00:17:11.938 "state": "enabled",
00:17:11.938 "thread": "nvmf_tgt_poll_group_000",
00:17:11.938 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:11.938 "listen_address": {
00:17:11.938 "trtype": "TCP",
00:17:11.938 "adrfam": "IPv4",
00:17:11.938 "traddr": "10.0.0.2",
00:17:11.938 "trsvcid": "4420"
00:17:11.938 },
00:17:11.938 "peer_address": {
00:17:11.938 "trtype": "TCP",
00:17:11.938 "adrfam": "IPv4",
00:17:11.938 "traddr": "10.0.0.1",
00:17:11.938 "trsvcid": "53404"
00:17:11.938 },
00:17:11.938 "auth": {
00:17:11.938 "state": "completed",
00:17:11.938 "digest": "sha256",
00:17:11.938 "dhgroup": "ffdhe8192"
00:17:11.938 }
00:17:11.938 }
00:17:11.938 ]'
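The verification block that follows each attach is the same three jq probes against the target's qpair list; condensed, the check performed by auth.sh@73-77 amounts to the sketch below (values shown for this iteration; rpc_cmd is the test framework's wrapper for the target-side rpc.py):

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]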
00:17:11.938 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:11.938 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:11.938 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:12.201 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:12.201 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:12.201 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:12.201 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:12.201 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:12.201 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+:
00:17:12.201 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+:
00:17:13.276 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:13.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:13.276 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:13.276 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:13.276 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:13.276 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:13.276 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:13.276 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:17:13.276 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:17:13.276 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3
00:17:13.276 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:13.276 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:13.276 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:13.276 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:13.276 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:13.276 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:17:13.276 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:13.276 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:13.276 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:13.276 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:13.276 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:13.276 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:13.537
00:17:13.537 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:13.537 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:13.537 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:13.798 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:13.798 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:13.798 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:13.798 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:13.798 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:13.798 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:13.798 {
00:17:13.798 "cntlid": 47,
00:17:13.798 "qid": 0,
00:17:13.798 "state": "enabled",
00:17:13.798 "thread": "nvmf_tgt_poll_group_000",
00:17:13.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:13.798 "listen_address": {
00:17:13.798 "trtype": "TCP",
00:17:13.798 "adrfam": "IPv4",
00:17:13.798 "traddr": "10.0.0.2",
00:17:13.798 "trsvcid": "4420"
00:17:13.798 },
00:17:13.798 "peer_address": {
00:17:13.798 "trtype": "TCP",
00:17:13.798 "adrfam": "IPv4",
00:17:13.798 "traddr": "10.0.0.1",
00:17:13.798 "trsvcid": "53440"
00:17:13.798 },
00:17:13.798 "auth": {
00:17:13.798 "state": "completed",
"digest": "sha256", 00:17:13.798 "dhgroup": "ffdhe8192" 00:17:13.798 } 00:17:13.798 } 00:17:13.798 ]' 00:17:13.798 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.798 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:13.798 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.059 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:14.059 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.059 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.059 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.059 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.059 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:17:14.059 19:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:17:15.001 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.001 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:15.001 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.001 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.001 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.001 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:15.001 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.001 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.001 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:15.001 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:15.001 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:15.001 19:55:15 
00:17:15.001 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:15.001 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:15.001 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:17:15.001 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:15.001 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:15.001 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:15.001 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:15.001 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:15.001 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:15.001 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:15.001 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:15.001 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:15.262
00:17:15.262 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:15.262 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:15.262 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:15.522 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:15.522 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:15.522 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:15.522 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:15.522 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:15.522 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:15.522 {
00:17:15.522 "cntlid": 49,
00:17:15.522 "qid": 0,
00:17:15.522 "state": "enabled",
00:17:15.522 "thread": "nvmf_tgt_poll_group_000",
00:17:15.522 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:15.522 "listen_address": {
00:17:15.522 "trtype": "TCP",
00:17:15.522 "adrfam": "IPv4",
00:17:15.522 "traddr": "10.0.0.2", 00:17:15.522 "trsvcid": "4420" 00:17:15.522 }, 00:17:15.522 "peer_address": { 00:17:15.522 "trtype": "TCP", 00:17:15.522 "adrfam": "IPv4", 00:17:15.522 "traddr": "10.0.0.1", 00:17:15.522 "trsvcid": "46806" 00:17:15.522 }, 00:17:15.522 "auth": { 00:17:15.522 "state": "completed", 00:17:15.522 "digest": "sha384", 00:17:15.522 "dhgroup": "null" 00:17:15.522 } 00:17:15.522 } 00:17:15.522 ]' 00:17:15.522 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.522 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.522 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.522 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:15.522 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.522 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.522 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.522 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.783 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:17:15.783 19:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:17:16.354 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.354 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:16.354 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.354 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.354 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.354 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.354 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:16.354 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:17:16.354 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:17:16.616 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1
00:17:16.616 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:16.616 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:16.616 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:17:16.616 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:16.616 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:16.616 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:16.616 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:16.616 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:16.616 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:16.616 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:16.616 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:16.616 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:16.877
00:17:16.877 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:16.877 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:16.877 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:17.137 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:17.137 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:17.137 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:17.137 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:17.137 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:17.137 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:17.137 {
00:17:17.137 "cntlid": 51,
00:17:17.137 "qid": 0,
00:17:17.137 "state": "enabled",
00:17:17.137 "thread": "nvmf_tgt_poll_group_000", 00:17:17.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:17.137 "listen_address": { 00:17:17.137 "trtype": "TCP", 00:17:17.137 "adrfam": "IPv4", 00:17:17.137 "traddr": "10.0.0.2", 00:17:17.137 "trsvcid": "4420" 00:17:17.137 }, 00:17:17.137 "peer_address": { 00:17:17.137 "trtype": "TCP", 00:17:17.137 "adrfam": "IPv4", 00:17:17.137 "traddr": "10.0.0.1", 00:17:17.137 "trsvcid": "46822" 00:17:17.137 }, 00:17:17.137 "auth": { 00:17:17.137 "state": "completed", 00:17:17.137 "digest": "sha384", 00:17:17.137 "dhgroup": "null" 00:17:17.137 } 00:17:17.137 } 00:17:17.137 ]' 00:17:17.137 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.137 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.137 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.137 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:17.137 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.137 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.137 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.137 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.398 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:17:17.398 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:17:17.969 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.969 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:17.969 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.969 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.969 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.969 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.969 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:17:17.969 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:18.230 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:18.230 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.230 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:18.230 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:18.230 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:18.230 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.230 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.230 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.230 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.230 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.230 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.230 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.230 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.491 00:17:18.491 19:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.491 19:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.491 19:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.752 19:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.752 19:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.752 19:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.752 19:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.752 19:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.752 19:55:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.752 { 00:17:18.752 "cntlid": 53, 00:17:18.752 "qid": 0, 00:17:18.752 "state": "enabled", 00:17:18.752 "thread": "nvmf_tgt_poll_group_000", 00:17:18.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:18.752 "listen_address": { 00:17:18.752 "trtype": "TCP", 00:17:18.752 "adrfam": "IPv4", 00:17:18.752 "traddr": "10.0.0.2", 00:17:18.752 "trsvcid": "4420" 00:17:18.752 }, 00:17:18.752 "peer_address": { 00:17:18.752 "trtype": "TCP", 00:17:18.752 "adrfam": "IPv4", 00:17:18.752 "traddr": "10.0.0.1", 00:17:18.752 "trsvcid": "46848" 00:17:18.752 }, 00:17:18.752 "auth": { 00:17:18.752 "state": "completed", 00:17:18.752 "digest": "sha384", 00:17:18.752 "dhgroup": "null" 00:17:18.752 } 00:17:18.752 } 00:17:18.752 ]' 00:17:18.752 19:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.752 19:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.752 19:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.752 19:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:18.752 19:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.752 19:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.752 19:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.752 19:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.013 19:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:17:19.013 19:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:17:19.584 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.584 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:19.584 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.584 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.584 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.584 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:17:19.584 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:19.584 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:19.846 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:19.846 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.846 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:19.846 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:19.846 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:19.846 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.846 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:19.846 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.846 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.846 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.846 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:19.846 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.846 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.106 00:17:20.106 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.106 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.106 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.106 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.106 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.106 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.106 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.367 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.367 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.367 { 00:17:20.367 "cntlid": 55, 00:17:20.367 "qid": 0, 00:17:20.367 "state": "enabled", 00:17:20.367 "thread": "nvmf_tgt_poll_group_000", 00:17:20.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:20.367 "listen_address": { 00:17:20.367 "trtype": "TCP", 00:17:20.367 "adrfam": "IPv4", 00:17:20.367 "traddr": "10.0.0.2", 00:17:20.367 "trsvcid": "4420" 00:17:20.367 }, 00:17:20.367 "peer_address": { 00:17:20.367 "trtype": "TCP", 00:17:20.367 "adrfam": "IPv4", 00:17:20.367 "traddr": "10.0.0.1", 00:17:20.367 "trsvcid": "46876" 00:17:20.367 }, 00:17:20.367 "auth": { 00:17:20.367 "state": "completed", 00:17:20.367 "digest": "sha384", 00:17:20.367 "dhgroup": "null" 00:17:20.367 } 00:17:20.367 } 00:17:20.367 ]' 00:17:20.367 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.367 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.367 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.367 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:20.367 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.367 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.367 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.367 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.628 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:17:20.628 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:17:21.202 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.202 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:21.202 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.202 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.202 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.202 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.202 19:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.202 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:21.202 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:21.463 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:21.463 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.463 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:21.463 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:21.463 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:21.463 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.463 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.463 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.463 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.463 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.463 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.463 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.463 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.723 00:17:21.723 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.723 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.723 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.723 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.723 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.723 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:21.723 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.723 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.723 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.723 { 00:17:21.723 "cntlid": 57, 00:17:21.723 "qid": 0, 00:17:21.724 "state": "enabled", 00:17:21.724 "thread": "nvmf_tgt_poll_group_000", 00:17:21.724 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:21.724 "listen_address": { 00:17:21.724 "trtype": "TCP", 00:17:21.724 "adrfam": "IPv4", 00:17:21.724 "traddr": "10.0.0.2", 00:17:21.724 "trsvcid": "4420" 00:17:21.724 }, 00:17:21.724 "peer_address": { 00:17:21.724 "trtype": "TCP", 00:17:21.724 "adrfam": "IPv4", 00:17:21.724 "traddr": "10.0.0.1", 00:17:21.724 "trsvcid": "46882" 00:17:21.724 }, 00:17:21.724 "auth": { 00:17:21.724 "state": "completed", 00:17:21.724 "digest": "sha384", 00:17:21.724 "dhgroup": "ffdhe2048" 00:17:21.724 } 00:17:21.724 } 00:17:21.724 ]' 00:17:21.724 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.984 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.984 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.984 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:21.984 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.984 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.984 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.984 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.244 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:17:22.244 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:17:22.815 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.815 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:22.815 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.815 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.815 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.815 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.815 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:22.815 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:23.075 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:23.075 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.075 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:23.075 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:23.075 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:23.075 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.075 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.075 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.075 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.075 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.075 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.075 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.075 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.335 00:17:23.335 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.335 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.336 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.336 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.597 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.597 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.597 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.597 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.597 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.597 { 00:17:23.597 "cntlid": 59, 00:17:23.597 "qid": 0, 00:17:23.597 "state": "enabled", 00:17:23.597 "thread": "nvmf_tgt_poll_group_000", 00:17:23.597 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:23.597 "listen_address": { 00:17:23.597 "trtype": "TCP", 00:17:23.597 "adrfam": "IPv4", 00:17:23.597 "traddr": "10.0.0.2", 00:17:23.597 "trsvcid": "4420" 00:17:23.597 }, 00:17:23.597 "peer_address": { 00:17:23.597 "trtype": "TCP", 00:17:23.597 "adrfam": "IPv4", 00:17:23.597 "traddr": "10.0.0.1", 00:17:23.597 "trsvcid": "46926" 00:17:23.597 }, 00:17:23.597 "auth": { 00:17:23.597 "state": "completed", 00:17:23.597 "digest": "sha384", 00:17:23.597 "dhgroup": "ffdhe2048" 00:17:23.597 } 00:17:23.597 } 00:17:23.597 ]' 00:17:23.597 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.597 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.597 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.597 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:23.597 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.597 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.597 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.597 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.858 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:17:23.858 19:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:17:24.428 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.428 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:24.428 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.428 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.428 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.428 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.428 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:24.428 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:24.688 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:24.688 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.688 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:24.688 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:24.688 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:24.688 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.688 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.688 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.688 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.688 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.688 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.688 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.688 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.948 00:17:24.948 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.948 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.948 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.208 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.208 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.208 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.208 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.208 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.208 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.208 { 00:17:25.208 "cntlid": 61, 00:17:25.208 "qid": 0, 00:17:25.208 "state": "enabled", 00:17:25.208 "thread": "nvmf_tgt_poll_group_000", 00:17:25.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:25.208 "listen_address": { 00:17:25.208 "trtype": "TCP", 00:17:25.208 "adrfam": "IPv4", 00:17:25.208 "traddr": "10.0.0.2", 00:17:25.208 "trsvcid": "4420" 00:17:25.208 }, 00:17:25.208 "peer_address": { 00:17:25.208 "trtype": "TCP", 00:17:25.208 "adrfam": "IPv4", 00:17:25.208 "traddr": "10.0.0.1", 00:17:25.208 "trsvcid": "45360" 00:17:25.208 }, 00:17:25.208 "auth": { 00:17:25.208 "state": "completed", 00:17:25.208 "digest": "sha384", 00:17:25.208 "dhgroup": "ffdhe2048" 00:17:25.208 } 00:17:25.208 } 00:17:25.208 ]' 00:17:25.208 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.208 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.208 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.208 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:25.208 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.208 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.208 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.208 19:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.468 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:17:25.468 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:17:26.040 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.040 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:26.040 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.040 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.040 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.040 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.040 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:26.040 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:26.300 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:26.300 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.300 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:26.300 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:26.300 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:26.300 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.300 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:26.300 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.300 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.300 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.300 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:26.300 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:26.300 19:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:26.560 00:17:26.560 19:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.560 19:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.560 19:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.820 19:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.820 19:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.820 19:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.820 19:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.820 19:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.820 19:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.820 { 00:17:26.820 "cntlid": 63, 00:17:26.820 "qid": 0, 00:17:26.820 "state": "enabled", 00:17:26.820 "thread": "nvmf_tgt_poll_group_000", 00:17:26.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:26.820 "listen_address": { 00:17:26.820 "trtype": "TCP", 00:17:26.820 "adrfam": "IPv4", 00:17:26.820 "traddr": "10.0.0.2", 00:17:26.820 "trsvcid": "4420" 00:17:26.820 }, 00:17:26.820 "peer_address": { 00:17:26.820 "trtype": "TCP", 00:17:26.820 "adrfam": "IPv4", 00:17:26.820 "traddr": "10.0.0.1", 00:17:26.820 "trsvcid": "45398" 00:17:26.820 }, 00:17:26.820 "auth": { 00:17:26.820 "state": "completed", 00:17:26.820 "digest": "sha384", 00:17:26.820 "dhgroup": "ffdhe2048" 00:17:26.820 } 00:17:26.820 } 00:17:26.820 ]' 00:17:26.820 19:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.820 19:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.820 19:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.820 19:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:26.820 19:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.820 19:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.820 19:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.820 19:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.081 19:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:17:27.081 19:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:17:27.651 19:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:27.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.651 19:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:27.651 19:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.651 19:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.651 19:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.651 19:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:27.651 19:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.651 19:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:27.651 19:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:27.912 19:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:27.912 19:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.912 19:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:27.912 19:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:27.912 19:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:27.912 19:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.912 19:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.912 19:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.912 19:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.912 19:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.912 19:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.912 19:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.913 19:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.173 
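At this point the sweep has moved from dhgroup "null" through ffdhe2048 and on to ffdhe3072, repeating the same key cycle each time. The driving loops, as reconstructed from the auth.sh@119-123 xtrace entries (sha384 is the digest under test in this stretch; the dhgroups and keys arrays are assumptions about how the script names them):

for dhgroup in "${dhgroups[@]}"; do      # auth.sh@119: null, ffdhe2048, ffdhe3072, ...
    for keyid in "${!keys[@]}"; do       # auth.sh@120: key indices 0..3
        # Restrict the host to one digest/dhgroup combination per attempt (@121)
        hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha384 "$dhgroup" "$keyid"    # @123, sketched earlier
    done
done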
00:17:28.173 19:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.173 19:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.173 19:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.438 19:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.438 19:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.438 19:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.438 19:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.438 19:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.438 19:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.438 { 00:17:28.438 "cntlid": 65, 00:17:28.438 "qid": 0, 00:17:28.438 "state": "enabled", 00:17:28.438 "thread": "nvmf_tgt_poll_group_000", 00:17:28.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:28.438 "listen_address": { 00:17:28.438 "trtype": "TCP", 00:17:28.438 "adrfam": "IPv4", 00:17:28.438 "traddr": "10.0.0.2", 00:17:28.438 "trsvcid": "4420" 00:17:28.438 }, 00:17:28.438 "peer_address": { 00:17:28.438 "trtype": "TCP", 00:17:28.438 "adrfam": "IPv4", 00:17:28.438 "traddr": "10.0.0.1", 00:17:28.438 "trsvcid": "45418" 00:17:28.438 }, 00:17:28.438 "auth": { 00:17:28.438 "state": "completed", 00:17:28.438 "digest": "sha384", 00:17:28.438 "dhgroup": "ffdhe3072" 00:17:28.438 } 00:17:28.438 } 00:17:28.438 ]' 00:17:28.438 19:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.438 19:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.438 19:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.438 19:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:28.438 19:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.438 19:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.438 19:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.438 19:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.700 19:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:17:28.700 19:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:17:29.273 19:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.273 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:29.273 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.273 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.273 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.273 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.273 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:29.273 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:29.533 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:29.533 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.533 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:29.533 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:29.533 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:29.533 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.533 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.533 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.533 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.533 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.533 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.533 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.533 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.793 00:17:29.793 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.793 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.793 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.053 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.053 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.053 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.053 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.053 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.053 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.053 { 00:17:30.053 "cntlid": 67, 00:17:30.053 "qid": 0, 00:17:30.053 "state": "enabled", 00:17:30.053 "thread": "nvmf_tgt_poll_group_000", 00:17:30.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:30.053 "listen_address": { 00:17:30.053 "trtype": "TCP", 00:17:30.053 "adrfam": "IPv4", 00:17:30.053 "traddr": "10.0.0.2", 00:17:30.053 "trsvcid": "4420" 00:17:30.053 }, 00:17:30.053 "peer_address": { 00:17:30.053 "trtype": "TCP", 00:17:30.053 "adrfam": "IPv4", 00:17:30.053 "traddr": "10.0.0.1", 00:17:30.053 "trsvcid": "45446" 00:17:30.053 }, 00:17:30.053 "auth": { 00:17:30.053 "state": "completed", 00:17:30.053 "digest": "sha384", 00:17:30.053 "dhgroup": "ffdhe3072" 00:17:30.053 } 00:17:30.053 } 00:17:30.053 ]' 00:17:30.053 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.053 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.053 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.053 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:30.053 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.053 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.053 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.053 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.313 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret 
DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:17:30.313 19:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:17:30.883 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.883 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:30.883 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.883 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.883 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.883 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.883 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:30.883 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:31.143 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:31.143 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.143 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:31.143 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:31.143 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:31.143 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.143 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.143 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.143 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.143 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.143 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.143 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.143 19:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.402 00:17:31.402 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.402 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.402 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.662 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.662 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.662 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.662 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.662 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.662 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.662 { 00:17:31.662 "cntlid": 69, 00:17:31.662 "qid": 0, 00:17:31.662 "state": "enabled", 00:17:31.662 "thread": "nvmf_tgt_poll_group_000", 00:17:31.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:31.662 "listen_address": { 00:17:31.662 "trtype": "TCP", 00:17:31.662 "adrfam": "IPv4", 00:17:31.662 "traddr": "10.0.0.2", 00:17:31.662 "trsvcid": "4420" 00:17:31.662 }, 00:17:31.662 "peer_address": { 00:17:31.662 "trtype": "TCP", 00:17:31.662 "adrfam": "IPv4", 00:17:31.662 "traddr": "10.0.0.1", 00:17:31.662 "trsvcid": "45486" 00:17:31.662 }, 00:17:31.662 "auth": { 00:17:31.662 "state": "completed", 00:17:31.662 "digest": "sha384", 00:17:31.662 "dhgroup": "ffdhe3072" 00:17:31.662 } 00:17:31.662 } 00:17:31.662 ]' 00:17:31.662 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.662 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.662 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.662 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:31.662 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.662 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.662 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.662 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:31.923 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:17:31.923 19:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:17:32.496 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.496 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:32.496 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.496 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.496 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.496 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.496 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:32.496 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:32.756 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:32.756 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.756 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:32.756 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:32.756 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:32.756 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.756 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:32.756 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.756 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.756 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.756 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
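Each pass in the trace above follows the same shape: bdev_nvme_set_options pins the host to a single digest/dhgroup pair, nvmf_subsystem_add_host registers the host NQN and its DH-HMAC-CHAP key(s) on the target subsystem, and bdev_nvme_attach_controller performs the authenticated connect. A minimal sketch of that cycle, reconstructed from the trace -- hostrpc/rpc_cmd stand in for the suite's wrappers around scripts/rpc.py against the host and target RPC sockets, and the loop bounds, array names, and shell variables are assumptions, not the verbatim suite code:

    # One authentication pass per key index, as traced above (sketch).
    # keys[]/ckeys[] are assumed to hold previously registered key names.
    for keyid in "${!keys[@]}"; do
        ckey=()
        # a controller key is optional; without it auth is unidirectional
        [[ -n ${ckeys[keyid]:-} ]] && ckey=(--dhchap-ctrlr-key "ckey$keyid")
        hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
        rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
            --dhchap-key "key$keyid" "${ckey[@]}"
        hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
            --dhchap-key "key$keyid" "${ckey[@]}"
    done

This is visible in the key3 pass around this point in the trace: no ckey3 is supplied, so the ckey expansion is empty, nvmf_subsystem_add_host gets only --dhchap-key key3, and the session authenticates unidirectionally (host only), whereas the key0-key2 passes also carry a controller key for bidirectional DH-HMAC-CHAP.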
00:17:32.756 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:32.757 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.017 00:17:33.017 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.017 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.017 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.278 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.278 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.278 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.278 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.278 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.278 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.278 { 00:17:33.278 "cntlid": 71, 00:17:33.278 "qid": 0, 00:17:33.278 "state": "enabled", 00:17:33.278 "thread": "nvmf_tgt_poll_group_000", 00:17:33.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:33.278 "listen_address": { 00:17:33.278 "trtype": "TCP", 00:17:33.278 "adrfam": "IPv4", 00:17:33.278 "traddr": "10.0.0.2", 00:17:33.278 "trsvcid": "4420" 00:17:33.278 }, 00:17:33.278 "peer_address": { 00:17:33.278 "trtype": "TCP", 00:17:33.278 "adrfam": "IPv4", 00:17:33.279 "traddr": "10.0.0.1", 00:17:33.279 "trsvcid": "45508" 00:17:33.279 }, 00:17:33.279 "auth": { 00:17:33.279 "state": "completed", 00:17:33.279 "digest": "sha384", 00:17:33.279 "dhgroup": "ffdhe3072" 00:17:33.279 } 00:17:33.279 } 00:17:33.279 ]' 00:17:33.279 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.279 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.279 19:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.279 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:33.279 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.279 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.279 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.279 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.540 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:17:33.540 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:17:34.110 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.111 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:34.111 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.111 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.111 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.111 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:34.111 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.111 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:34.111 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:34.371 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:34.371 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.371 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:34.371 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:34.371 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:34.371 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.371 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.371 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.371 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.371 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
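After every attach, the suite verifies that authentication actually ran with the negotiated parameters: it reads the controller name back with bdev_nvme_get_controllers, dumps the subsystem's queue pairs with nvmf_subsystem_get_qpairs, and asserts on the returned auth object using the jq filters seen in the trace. A sketch of that check under the same assumptions as the sketch above (wrapper and variable names are illustrative):

    # Post-attach verification, as traced above (sketch).
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    # digest/dhgroup must match what bdev_nvme_set_options forced, and
    # state "completed" shows the DH-HMAC-CHAP transaction finished
    # rather than being skipped.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

The backslash-heavy patterns in the trace (e.g. \s\h\a\3\8\4) are just xtrace's escaped rendering of the right-hand side of these same [[ ... == ... ]] comparisons. The @119/@120 stack frames show the whole sequence repeating for each dhgroup (ffdhe3072, ffdhe4096, ffdhe6144 in this section) and each key index, with a kernel-side nvme connect/disconnect and nvmf_subsystem_remove_host closing out every pass; the DHHC-1:NN:<base64>: secrets passed there follow the NVMe-oF textual secret representation, where the two-digit field selects the hash used to transform the secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512).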
00:17:34.371 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.372 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.372 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.633 00:17:34.633 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.633 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.633 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.892 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.892 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.892 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.892 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.892 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.893 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.893 { 00:17:34.893 "cntlid": 73, 00:17:34.893 "qid": 0, 00:17:34.893 "state": "enabled", 00:17:34.893 "thread": "nvmf_tgt_poll_group_000", 00:17:34.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:34.893 "listen_address": { 00:17:34.893 "trtype": "TCP", 00:17:34.893 "adrfam": "IPv4", 00:17:34.893 "traddr": "10.0.0.2", 00:17:34.893 "trsvcid": "4420" 00:17:34.893 }, 00:17:34.893 "peer_address": { 00:17:34.893 "trtype": "TCP", 00:17:34.893 "adrfam": "IPv4", 00:17:34.893 "traddr": "10.0.0.1", 00:17:34.893 "trsvcid": "36942" 00:17:34.893 }, 00:17:34.893 "auth": { 00:17:34.893 "state": "completed", 00:17:34.893 "digest": "sha384", 00:17:34.893 "dhgroup": "ffdhe4096" 00:17:34.893 } 00:17:34.893 } 00:17:34.893 ]' 00:17:34.893 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.893 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.893 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.893 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:34.893 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.152 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.152 
19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.152 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.152 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:17:35.152 19:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:17:36.094 19:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.094 19:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:36.094 19:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.094 19:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.094 19:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.094 19:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.094 19:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:36.094 19:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:36.094 19:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:36.094 19:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.094 19:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:36.094 19:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:36.094 19:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:36.094 19:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.094 19:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.094 19:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.094 19:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.094 19:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.094 19:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.094 19:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.094 19:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.355 00:17:36.355 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.355 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.355 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.616 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.616 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.616 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.616 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.616 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.616 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.616 { 00:17:36.616 "cntlid": 75, 00:17:36.616 "qid": 0, 00:17:36.616 "state": "enabled", 00:17:36.616 "thread": "nvmf_tgt_poll_group_000", 00:17:36.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:36.616 "listen_address": { 00:17:36.616 "trtype": "TCP", 00:17:36.616 "adrfam": "IPv4", 00:17:36.616 "traddr": "10.0.0.2", 00:17:36.616 "trsvcid": "4420" 00:17:36.616 }, 00:17:36.616 "peer_address": { 00:17:36.616 "trtype": "TCP", 00:17:36.616 "adrfam": "IPv4", 00:17:36.616 "traddr": "10.0.0.1", 00:17:36.616 "trsvcid": "36978" 00:17:36.616 }, 00:17:36.616 "auth": { 00:17:36.616 "state": "completed", 00:17:36.616 "digest": "sha384", 00:17:36.616 "dhgroup": "ffdhe4096" 00:17:36.616 } 00:17:36.616 } 00:17:36.616 ]' 00:17:36.616 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.616 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.616 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.616 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:36.616 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.616 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.616 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.616 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.877 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:17:36.877 19:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:17:37.450 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.450 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:37.450 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.450 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.450 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.450 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.450 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:37.450 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:37.711 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:37.711 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.711 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:37.711 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:37.711 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:37.711 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.711 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.711 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.711 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.711 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.711 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.711 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.711 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.973 00:17:37.973 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.973 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.973 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.234 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.234 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.234 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.234 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.234 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.234 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.234 { 00:17:38.234 "cntlid": 77, 00:17:38.234 "qid": 0, 00:17:38.234 "state": "enabled", 00:17:38.234 "thread": "nvmf_tgt_poll_group_000", 00:17:38.234 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:38.234 "listen_address": { 00:17:38.234 "trtype": "TCP", 00:17:38.234 "adrfam": "IPv4", 00:17:38.234 "traddr": "10.0.0.2", 00:17:38.234 "trsvcid": "4420" 00:17:38.234 }, 00:17:38.234 "peer_address": { 00:17:38.234 "trtype": "TCP", 00:17:38.234 "adrfam": "IPv4", 00:17:38.234 "traddr": "10.0.0.1", 00:17:38.234 "trsvcid": "37000" 00:17:38.234 }, 00:17:38.234 "auth": { 00:17:38.234 "state": "completed", 00:17:38.234 "digest": "sha384", 00:17:38.234 "dhgroup": "ffdhe4096" 00:17:38.234 } 00:17:38.234 } 00:17:38.234 ]' 00:17:38.234 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.234 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.234 19:55:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.234 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:38.234 19:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.234 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.234 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.234 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.495 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:17:38.495 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:17:39.066 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.066 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:39.066 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.066 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.066 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.066 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.066 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:39.066 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:39.327 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:39.327 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.327 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:39.327 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:39.327 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:39.327 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.327 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:39.327 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.327 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.327 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.327 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:39.327 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:39.327 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:39.587 00:17:39.588 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.588 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.588 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.848 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.848 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.848 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.848 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.848 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.848 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.848 { 00:17:39.848 "cntlid": 79, 00:17:39.848 "qid": 0, 00:17:39.848 "state": "enabled", 00:17:39.848 "thread": "nvmf_tgt_poll_group_000", 00:17:39.848 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:39.848 "listen_address": { 00:17:39.848 "trtype": "TCP", 00:17:39.848 "adrfam": "IPv4", 00:17:39.848 "traddr": "10.0.0.2", 00:17:39.848 "trsvcid": "4420" 00:17:39.848 }, 00:17:39.848 "peer_address": { 00:17:39.848 "trtype": "TCP", 00:17:39.848 "adrfam": "IPv4", 00:17:39.848 "traddr": "10.0.0.1", 00:17:39.848 "trsvcid": "37016" 00:17:39.848 }, 00:17:39.848 "auth": { 00:17:39.848 "state": "completed", 00:17:39.848 "digest": "sha384", 00:17:39.848 "dhgroup": "ffdhe4096" 00:17:39.848 } 00:17:39.848 } 00:17:39.848 ]' 00:17:39.848 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.848 19:55:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.848 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.848 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:39.848 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.108 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.108 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.108 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.108 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:17:40.108 19:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:17:40.677 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.937 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:40.937 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.937 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.937 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.937 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:40.937 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.937 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:40.937 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:40.937 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:40.937 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.937 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:40.937 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:40.937 19:55:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:40.937 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.937 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.937 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.937 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.937 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.937 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.937 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.937 19:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.197 00:17:41.456 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.456 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.456 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.456 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.456 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.456 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.456 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.456 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.456 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.456 { 00:17:41.456 "cntlid": 81, 00:17:41.456 "qid": 0, 00:17:41.456 "state": "enabled", 00:17:41.456 "thread": "nvmf_tgt_poll_group_000", 00:17:41.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:41.456 "listen_address": { 00:17:41.456 "trtype": "TCP", 00:17:41.456 "adrfam": "IPv4", 00:17:41.456 "traddr": "10.0.0.2", 00:17:41.456 "trsvcid": "4420" 00:17:41.456 }, 00:17:41.456 "peer_address": { 00:17:41.456 "trtype": "TCP", 00:17:41.456 "adrfam": "IPv4", 00:17:41.456 "traddr": "10.0.0.1", 00:17:41.456 "trsvcid": "37042" 00:17:41.456 }, 00:17:41.456 "auth": { 00:17:41.456 "state": "completed", 00:17:41.456 "digest": 
"sha384", 00:17:41.456 "dhgroup": "ffdhe6144" 00:17:41.456 } 00:17:41.456 } 00:17:41.456 ]' 00:17:41.456 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.456 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.456 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.715 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:41.715 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.715 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.715 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.715 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.027 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:17:42.027 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:17:42.594 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.594 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:42.594 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.594 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.595 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.595 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.595 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:42.595 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:42.853 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:42.853 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.853 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:42.853 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:42.853 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:42.853 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.853 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.854 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.854 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.854 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.854 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.854 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.854 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.111 00:17:43.111 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.111 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.111 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.369 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.369 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.369 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.369 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.369 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.369 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.369 { 00:17:43.369 "cntlid": 83, 00:17:43.369 "qid": 0, 00:17:43.369 "state": "enabled", 00:17:43.369 "thread": "nvmf_tgt_poll_group_000", 00:17:43.369 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:43.369 "listen_address": { 00:17:43.369 "trtype": "TCP", 00:17:43.369 "adrfam": "IPv4", 00:17:43.369 "traddr": "10.0.0.2", 00:17:43.369 
"trsvcid": "4420" 00:17:43.369 }, 00:17:43.369 "peer_address": { 00:17:43.369 "trtype": "TCP", 00:17:43.369 "adrfam": "IPv4", 00:17:43.369 "traddr": "10.0.0.1", 00:17:43.369 "trsvcid": "37070" 00:17:43.369 }, 00:17:43.369 "auth": { 00:17:43.369 "state": "completed", 00:17:43.369 "digest": "sha384", 00:17:43.369 "dhgroup": "ffdhe6144" 00:17:43.369 } 00:17:43.369 } 00:17:43.369 ]' 00:17:43.369 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.369 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.369 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.370 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:43.370 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.370 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.370 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.370 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.629 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:17:43.629 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:17:44.198 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.198 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.198 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.199 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.199 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.199 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.199 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:44.199 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:44.459 
19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:44.459 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.459 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:44.459 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:44.459 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:44.459 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.459 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.459 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.459 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.459 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.459 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.459 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.459 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.719 00:17:44.719 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.719 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.719 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.979 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.979 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.979 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.979 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.979 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.979 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.979 { 00:17:44.979 "cntlid": 85, 00:17:44.979 "qid": 0, 00:17:44.979 "state": "enabled", 00:17:44.980 "thread": "nvmf_tgt_poll_group_000", 00:17:44.980 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:44.980 "listen_address": { 00:17:44.980 "trtype": "TCP", 00:17:44.980 "adrfam": "IPv4", 00:17:44.980 "traddr": "10.0.0.2", 00:17:44.980 "trsvcid": "4420" 00:17:44.980 }, 00:17:44.980 "peer_address": { 00:17:44.980 "trtype": "TCP", 00:17:44.980 "adrfam": "IPv4", 00:17:44.980 "traddr": "10.0.0.1", 00:17:44.980 "trsvcid": "34200" 00:17:44.980 }, 00:17:44.980 "auth": { 00:17:44.980 "state": "completed", 00:17:44.980 "digest": "sha384", 00:17:44.980 "dhgroup": "ffdhe6144" 00:17:44.980 } 00:17:44.980 } 00:17:44.980 ]' 00:17:44.980 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.980 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.980 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.980 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:44.980 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.980 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.980 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.980 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.241 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:17:45.241 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:17:45.811 19:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.073 19:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:46.073 19:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.073 19:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.073 19:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.073 19:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.073 19:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:46.073 19:55:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:46.073 19:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:46.073 19:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.073 19:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:46.073 19:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:46.073 19:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:46.073 19:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.073 19:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:46.073 19:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.073 19:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.073 19:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.073 19:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:46.073 19:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:46.073 19:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:46.332 00:17:46.592 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.592 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.592 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.592 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.592 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.592 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.592 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.592 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.592 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.592 { 00:17:46.592 "cntlid": 87, 
00:17:46.592 "qid": 0, 00:17:46.592 "state": "enabled", 00:17:46.592 "thread": "nvmf_tgt_poll_group_000", 00:17:46.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:46.592 "listen_address": { 00:17:46.592 "trtype": "TCP", 00:17:46.592 "adrfam": "IPv4", 00:17:46.592 "traddr": "10.0.0.2", 00:17:46.592 "trsvcid": "4420" 00:17:46.592 }, 00:17:46.592 "peer_address": { 00:17:46.592 "trtype": "TCP", 00:17:46.592 "adrfam": "IPv4", 00:17:46.592 "traddr": "10.0.0.1", 00:17:46.592 "trsvcid": "34236" 00:17:46.592 }, 00:17:46.592 "auth": { 00:17:46.592 "state": "completed", 00:17:46.592 "digest": "sha384", 00:17:46.593 "dhgroup": "ffdhe6144" 00:17:46.593 } 00:17:46.593 } 00:17:46.593 ]' 00:17:46.593 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.593 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.593 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.852 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:46.852 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.852 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.852 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.852 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.852 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:17:46.852 19:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:17:47.793 19:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.793 19:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:47.793 19:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.793 19:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.793 19:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.793 19:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.793 19:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.793 19:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:47.793 19:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:47.793 19:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:47.793 19:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.794 19:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:47.794 19:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:47.794 19:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:47.794 19:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.794 19:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.794 19:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.794 19:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.794 19:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.794 19:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.794 19:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.794 19:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.363 00:17:48.364 19:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.364 19:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.364 19:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.364 19:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.364 19:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.364 19:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.364 19:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.622 19:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.622 19:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.622 { 00:17:48.622 "cntlid": 89, 00:17:48.622 "qid": 0, 00:17:48.622 "state": "enabled", 00:17:48.622 "thread": "nvmf_tgt_poll_group_000", 00:17:48.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:48.622 "listen_address": { 00:17:48.622 "trtype": "TCP", 00:17:48.622 "adrfam": "IPv4", 00:17:48.622 "traddr": "10.0.0.2", 00:17:48.622 "trsvcid": "4420" 00:17:48.622 }, 00:17:48.622 "peer_address": { 00:17:48.622 "trtype": "TCP", 00:17:48.622 "adrfam": "IPv4", 00:17:48.622 "traddr": "10.0.0.1", 00:17:48.622 "trsvcid": "34274" 00:17:48.622 }, 00:17:48.622 "auth": { 00:17:48.622 "state": "completed", 00:17:48.622 "digest": "sha384", 00:17:48.622 "dhgroup": "ffdhe8192" 00:17:48.622 } 00:17:48.622 } 00:17:48.622 ]' 00:17:48.622 19:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.622 19:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:48.622 19:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.622 19:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:48.622 19:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.622 19:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.622 19:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.622 19:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.881 19:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:17:48.881 19:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:17:49.452 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.452 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:49.452 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.452 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.452 19:55:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.452 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.452 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:49.452 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:49.714 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:49.714 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.714 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:49.714 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:49.714 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:49.714 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.714 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.714 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.714 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.714 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.714 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.714 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.714 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.283 00:17:50.283 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.283 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.283 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.283 19:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.283 19:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:50.283 19:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.283 19:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.283 19:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.283 19:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.283 { 00:17:50.283 "cntlid": 91, 00:17:50.283 "qid": 0, 00:17:50.283 "state": "enabled", 00:17:50.283 "thread": "nvmf_tgt_poll_group_000", 00:17:50.283 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:50.283 "listen_address": { 00:17:50.283 "trtype": "TCP", 00:17:50.283 "adrfam": "IPv4", 00:17:50.283 "traddr": "10.0.0.2", 00:17:50.283 "trsvcid": "4420" 00:17:50.283 }, 00:17:50.283 "peer_address": { 00:17:50.283 "trtype": "TCP", 00:17:50.283 "adrfam": "IPv4", 00:17:50.283 "traddr": "10.0.0.1", 00:17:50.283 "trsvcid": "34310" 00:17:50.283 }, 00:17:50.283 "auth": { 00:17:50.283 "state": "completed", 00:17:50.283 "digest": "sha384", 00:17:50.283 "dhgroup": "ffdhe8192" 00:17:50.283 } 00:17:50.283 } 00:17:50.283 ]' 00:17:50.283 19:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.283 19:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:50.283 19:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.664 19:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:50.664 19:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.664 19:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.664 19:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.664 19:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.664 19:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:17:50.664 19:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:17:51.257 19:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.257 19:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:51.257 19:55:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.257 19:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.257 19:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.257 19:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.257 19:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:51.257 19:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:51.517 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:51.517 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.517 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:51.517 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:51.517 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:51.517 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.517 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.518 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.518 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.518 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.518 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.518 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.518 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.088 00:17:52.088 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.088 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.088 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.088 19:55:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.088 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.088 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.088 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.088 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.088 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.088 { 00:17:52.088 "cntlid": 93, 00:17:52.088 "qid": 0, 00:17:52.088 "state": "enabled", 00:17:52.088 "thread": "nvmf_tgt_poll_group_000", 00:17:52.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:52.088 "listen_address": { 00:17:52.088 "trtype": "TCP", 00:17:52.088 "adrfam": "IPv4", 00:17:52.088 "traddr": "10.0.0.2", 00:17:52.088 "trsvcid": "4420" 00:17:52.088 }, 00:17:52.088 "peer_address": { 00:17:52.088 "trtype": "TCP", 00:17:52.088 "adrfam": "IPv4", 00:17:52.088 "traddr": "10.0.0.1", 00:17:52.088 "trsvcid": "34334" 00:17:52.088 }, 00:17:52.088 "auth": { 00:17:52.088 "state": "completed", 00:17:52.088 "digest": "sha384", 00:17:52.088 "dhgroup": "ffdhe8192" 00:17:52.088 } 00:17:52.088 } 00:17:52.088 ]' 00:17:52.088 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.368 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:52.368 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.368 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:52.369 19:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.369 19:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.369 19:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.369 19:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.627 19:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:17:52.627 19:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:17:53.197 19:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.197 19:55:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.197 19:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.197 19:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.197 19:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.197 19:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.197 19:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:53.197 19:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:53.458 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:53.458 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.458 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:53.458 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:53.458 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:53.458 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.458 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:53.458 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.458 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.458 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.458 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:53.458 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:53.458 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:53.720 00:17:53.720 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.720 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.720 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.982 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.982 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.982 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.982 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.982 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.982 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.982 { 00:17:53.982 "cntlid": 95, 00:17:53.982 "qid": 0, 00:17:53.982 "state": "enabled", 00:17:53.982 "thread": "nvmf_tgt_poll_group_000", 00:17:53.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:53.982 "listen_address": { 00:17:53.982 "trtype": "TCP", 00:17:53.982 "adrfam": "IPv4", 00:17:53.982 "traddr": "10.0.0.2", 00:17:53.982 "trsvcid": "4420" 00:17:53.982 }, 00:17:53.982 "peer_address": { 00:17:53.982 "trtype": "TCP", 00:17:53.982 "adrfam": "IPv4", 00:17:53.982 "traddr": "10.0.0.1", 00:17:53.982 "trsvcid": "34362" 00:17:53.982 }, 00:17:53.982 "auth": { 00:17:53.982 "state": "completed", 00:17:53.982 "digest": "sha384", 00:17:53.982 "dhgroup": "ffdhe8192" 00:17:53.982 } 00:17:53.982 } 00:17:53.982 ]' 00:17:53.982 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.982 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:53.982 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.244 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:54.244 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.244 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.244 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.244 19:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.244 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:17:54.244 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:17:55.184 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.184 19:55:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.184 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.184 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.184 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.184 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:55.184 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.184 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.184 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:55.184 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:55.184 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:55.184 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.184 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:55.184 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:55.184 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:55.184 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.184 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.184 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.184 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.184 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.184 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.184 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.184 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.446 00:17:55.446 
19:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.446 19:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.446 19:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.707 19:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.707 19:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.707 19:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.707 19:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.707 19:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.707 19:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.707 { 00:17:55.707 "cntlid": 97, 00:17:55.707 "qid": 0, 00:17:55.707 "state": "enabled", 00:17:55.707 "thread": "nvmf_tgt_poll_group_000", 00:17:55.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:55.707 "listen_address": { 00:17:55.707 "trtype": "TCP", 00:17:55.707 "adrfam": "IPv4", 00:17:55.707 "traddr": "10.0.0.2", 00:17:55.707 "trsvcid": "4420" 00:17:55.707 }, 00:17:55.707 "peer_address": { 00:17:55.707 "trtype": "TCP", 00:17:55.707 "adrfam": "IPv4", 00:17:55.707 "traddr": "10.0.0.1", 00:17:55.707 "trsvcid": "33098" 00:17:55.707 }, 00:17:55.707 "auth": { 00:17:55.707 "state": "completed", 00:17:55.707 "digest": "sha512", 00:17:55.707 "dhgroup": "null" 00:17:55.707 } 00:17:55.707 } 00:17:55.707 ]' 00:17:55.707 19:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.707 19:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.707 19:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.707 19:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:55.707 19:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.707 19:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.707 19:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.707 19:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.967 19:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:17:55.967 19:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:17:56.537 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.537 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:56.537 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.537 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.796 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.796 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.796 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:56.796 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:56.796 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:56.796 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.796 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:56.796 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:56.796 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:56.796 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.796 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.796 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.796 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.796 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.796 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.796 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.796 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.056 00:17:57.056 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.056 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.056 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.317 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.317 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.317 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.317 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.317 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.317 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.317 { 00:17:57.317 "cntlid": 99, 00:17:57.317 "qid": 0, 00:17:57.317 "state": "enabled", 00:17:57.317 "thread": "nvmf_tgt_poll_group_000", 00:17:57.317 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:57.317 "listen_address": { 00:17:57.317 "trtype": "TCP", 00:17:57.317 "adrfam": "IPv4", 00:17:57.317 "traddr": "10.0.0.2", 00:17:57.317 "trsvcid": "4420" 00:17:57.317 }, 00:17:57.317 "peer_address": { 00:17:57.317 "trtype": "TCP", 00:17:57.317 "adrfam": "IPv4", 00:17:57.317 "traddr": "10.0.0.1", 00:17:57.317 "trsvcid": "33128" 00:17:57.317 }, 00:17:57.317 "auth": { 00:17:57.317 "state": "completed", 00:17:57.317 "digest": "sha512", 00:17:57.317 "dhgroup": "null" 00:17:57.317 } 00:17:57.317 } 00:17:57.317 ]' 00:17:57.317 19:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.317 19:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.317 19:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.317 19:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:57.317 19:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.317 19:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.317 19:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.317 19:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.577 19:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:17:57.578 19:55:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:17:58.148 19:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.148 19:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.148 19:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.148 19:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.409 19:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.409 19:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.409 19:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:58.409 19:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:58.409 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:58.409 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.409 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:58.409 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:58.409 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:58.409 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.409 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.409 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.409 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.409 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.409 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.409 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
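For reference, each digest/dhgroup iteration in this loop reduces to the RPC sequence sketched below. This is a condensed reading of the commands already visible in the trace, not an additional test step: the socket path, NQNs, and key names (key2/ckey2) are the ones this run uses, the absolute workspace path to rpc.py is shortened to scripts/rpc.py, and the target-side rpc_cmd wrapper hides its own socket argument.

# Condensed per-iteration DH-HMAC-CHAP flow (sketch; assumes the key2/ckey2
# keyring entries were loaded earlier in the test and the target listens on
# 10.0.0.2:4420).

# 1. Restrict the host-side bdev layer to one digest/dhgroup combination
#    (hostrpc == rpc.py against the host app's socket).
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

# 2. Allow the host NQN on the subsystem, binding its DH-HMAC-CHAP keys
#    (target-side RPC via rpc_cmd).
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 3. Attach an authenticated controller from the host side.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 4. Verify the negotiated parameters on the target's qpair (the trace runs
#    three separate jq calls; they are folded into one here), then tear down.
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth | .digest, .dhgroup, .state'
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

Pinning the host to a single digest/dhgroup per iteration is what makes the qpair's auth block above a meaningful check: the [[ sha... == ... ]] and [[ ffdhe... == ... ]] comparisons confirm the target actually negotiated the one combination the host was allowed to offer, and "state": "completed" confirms the handshake finished.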
00:17:58.409 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.671 00:17:58.671 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.671 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.671 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.932 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.932 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.932 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.932 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.932 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.932 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.932 { 00:17:58.932 "cntlid": 101, 00:17:58.932 "qid": 0, 00:17:58.932 "state": "enabled", 00:17:58.932 "thread": "nvmf_tgt_poll_group_000", 00:17:58.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:58.932 "listen_address": { 00:17:58.932 "trtype": "TCP", 00:17:58.932 "adrfam": "IPv4", 00:17:58.932 "traddr": "10.0.0.2", 00:17:58.932 "trsvcid": "4420" 00:17:58.932 }, 00:17:58.932 "peer_address": { 00:17:58.932 "trtype": "TCP", 00:17:58.932 "adrfam": "IPv4", 00:17:58.932 "traddr": "10.0.0.1", 00:17:58.932 "trsvcid": "33164" 00:17:58.932 }, 00:17:58.932 "auth": { 00:17:58.932 "state": "completed", 00:17:58.932 "digest": "sha512", 00:17:58.932 "dhgroup": "null" 00:17:58.932 } 00:17:58.932 } 00:17:58.932 ]' 00:17:58.932 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.932 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.932 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.932 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:58.932 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.932 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.932 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.932 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.194 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:17:59.194 19:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:17:59.766 19:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.766 19:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.766 19:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.766 19:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.766 19:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.766 19:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.766 19:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:59.766 19:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:00.026 19:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:00.026 19:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.026 19:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:00.026 19:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:00.026 19:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:00.026 19:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.026 19:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:00.026 19:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.026 19:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.026 19:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.026 19:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:00.026 19:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:00.026 19:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:00.286 00:18:00.286 19:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.286 19:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.286 19:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.547 19:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.547 19:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.547 19:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.547 19:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.547 19:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.547 19:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.547 { 00:18:00.547 "cntlid": 103, 00:18:00.547 "qid": 0, 00:18:00.547 "state": "enabled", 00:18:00.547 "thread": "nvmf_tgt_poll_group_000", 00:18:00.547 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:00.547 "listen_address": { 00:18:00.547 "trtype": "TCP", 00:18:00.547 "adrfam": "IPv4", 00:18:00.547 "traddr": "10.0.0.2", 00:18:00.547 "trsvcid": "4420" 00:18:00.547 }, 00:18:00.547 "peer_address": { 00:18:00.547 "trtype": "TCP", 00:18:00.547 "adrfam": "IPv4", 00:18:00.547 "traddr": "10.0.0.1", 00:18:00.547 "trsvcid": "33188" 00:18:00.547 }, 00:18:00.547 "auth": { 00:18:00.547 "state": "completed", 00:18:00.547 "digest": "sha512", 00:18:00.547 "dhgroup": "null" 00:18:00.547 } 00:18:00.547 } 00:18:00.547 ]' 00:18:00.547 19:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.547 19:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.547 19:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.547 19:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:00.547 19:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.548 19:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.548 19:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.548 19:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.809 19:56:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:18:00.809 19:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:18:01.379 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.379 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:01.379 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.379 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.379 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.379 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:01.379 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.379 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:01.379 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:01.641 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:01.641 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.641 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:01.641 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:01.641 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:01.641 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.641 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.641 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.641 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.641 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.641 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
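Note the asymmetry in the key3 iteration above: nvmf_subsystem_add_host is called with --dhchap-key key3 only, and the matching nvme connect passes a --dhchap-secret but no --dhchap-ctrl-secret, so those passes exercise unidirectional authentication (the host is challenged, the controller is not). The ${ckeys[$3]:+...} expansion visible in the trace is what makes the controller key optional. On the nvme-cli side the two variants look like:

    # Bidirectional: host and controller each prove possession of a secret.
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -q "$HOSTNQN" \
        --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"
    # Unidirectional (the key3 case): omit the controller secret.
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -q "$HOSTNQN" \
        --dhchap-secret "$KEY"

From here the outer loop advances from the null DH group to ffdhe2048 and repeats the same key indices.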
00:18:01.641 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.641 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.902 00:18:01.902 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.902 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.902 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.164 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.164 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.164 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.164 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.164 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.164 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.164 { 00:18:02.164 "cntlid": 105, 00:18:02.164 "qid": 0, 00:18:02.164 "state": "enabled", 00:18:02.164 "thread": "nvmf_tgt_poll_group_000", 00:18:02.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:02.164 "listen_address": { 00:18:02.164 "trtype": "TCP", 00:18:02.164 "adrfam": "IPv4", 00:18:02.164 "traddr": "10.0.0.2", 00:18:02.164 "trsvcid": "4420" 00:18:02.164 }, 00:18:02.164 "peer_address": { 00:18:02.164 "trtype": "TCP", 00:18:02.164 "adrfam": "IPv4", 00:18:02.164 "traddr": "10.0.0.1", 00:18:02.164 "trsvcid": "33222" 00:18:02.164 }, 00:18:02.164 "auth": { 00:18:02.164 "state": "completed", 00:18:02.164 "digest": "sha512", 00:18:02.164 "dhgroup": "ffdhe2048" 00:18:02.164 } 00:18:02.164 } 00:18:02.164 ]' 00:18:02.164 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.164 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.164 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.164 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:02.164 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.164 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.164 19:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.164 19:56:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.424 19:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:18:02.424 19:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:18:02.995 19:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.995 19:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:02.995 19:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.995 19:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.995 19:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.995 19:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.995 19:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:02.996 19:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:03.258 19:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:03.258 19:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.258 19:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:03.258 19:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:03.258 19:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:03.258 19:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.258 19:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.258 19:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.258 19:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:03.258 19:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.258 19:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.258 19:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.258 19:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.519 00:18:03.519 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.519 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.519 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.780 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.780 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.780 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.780 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.780 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.780 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.780 { 00:18:03.780 "cntlid": 107, 00:18:03.780 "qid": 0, 00:18:03.780 "state": "enabled", 00:18:03.780 "thread": "nvmf_tgt_poll_group_000", 00:18:03.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:03.780 "listen_address": { 00:18:03.780 "trtype": "TCP", 00:18:03.780 "adrfam": "IPv4", 00:18:03.780 "traddr": "10.0.0.2", 00:18:03.780 "trsvcid": "4420" 00:18:03.780 }, 00:18:03.780 "peer_address": { 00:18:03.780 "trtype": "TCP", 00:18:03.780 "adrfam": "IPv4", 00:18:03.780 "traddr": "10.0.0.1", 00:18:03.780 "trsvcid": "33242" 00:18:03.780 }, 00:18:03.780 "auth": { 00:18:03.780 "state": "completed", 00:18:03.780 "digest": "sha512", 00:18:03.780 "dhgroup": "ffdhe2048" 00:18:03.780 } 00:18:03.780 } 00:18:03.780 ]' 00:18:03.780 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.780 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.780 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.780 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:03.780 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:18:03.780 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.780 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.780 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.043 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:18:04.043 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:18:04.614 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.614 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:04.614 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.614 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.614 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.614 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.614 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:04.614 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:04.876 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:04.876 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.876 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.876 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:04.876 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:04.876 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.876 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
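For reference, the secrets being exchanged use the standard DH-HMAC-CHAP representation DHHC-1:<t>:<base64>:, where <t> encodes how the secret was transformed (00 = unhashed, 01/02/03 = SHA-256/384/512) and the base64 payload carries the key material plus a CRC-32. Assuming a reasonably recent nvme-cli, keys of this shape can be generated with gen-dhchap-key; the flags below come from nvme-cli's interface, not from this trace:

    # Generate a 64-byte, SHA-512-transformed secret bound to the host NQN;
    # prints a string of the form DHHC-1:03:<base64>:
    nvme gen-dhchap-key --key-length=64 --hmac=3 --nqn "$HOSTNQN"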
00:18:04.876 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.876 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.876 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.876 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.876 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.876 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.136 00:18:05.136 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.136 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.136 19:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.397 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.397 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.397 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.397 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.397 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.397 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.397 { 00:18:05.397 "cntlid": 109, 00:18:05.397 "qid": 0, 00:18:05.397 "state": "enabled", 00:18:05.397 "thread": "nvmf_tgt_poll_group_000", 00:18:05.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:05.397 "listen_address": { 00:18:05.397 "trtype": "TCP", 00:18:05.397 "adrfam": "IPv4", 00:18:05.397 "traddr": "10.0.0.2", 00:18:05.397 "trsvcid": "4420" 00:18:05.397 }, 00:18:05.397 "peer_address": { 00:18:05.397 "trtype": "TCP", 00:18:05.397 "adrfam": "IPv4", 00:18:05.397 "traddr": "10.0.0.1", 00:18:05.397 "trsvcid": "42346" 00:18:05.397 }, 00:18:05.397 "auth": { 00:18:05.397 "state": "completed", 00:18:05.397 "digest": "sha512", 00:18:05.397 "dhgroup": "ffdhe2048" 00:18:05.397 } 00:18:05.397 } 00:18:05.397 ]' 00:18:05.397 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.397 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.397 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.397 19:56:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:05.397 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.397 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.397 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.397 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.658 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:18:05.658 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:18:06.228 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.228 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:06.228 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.228 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.228 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.228 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.228 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:06.228 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:06.488 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:06.488 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.488 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:06.488 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:06.488 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:06.488 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.488 19:56:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:06.488 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.488 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.488 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.488 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:06.488 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:06.488 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:06.750 00:18:06.750 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.750 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.750 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.012 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.012 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.012 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.012 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.012 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.012 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.012 { 00:18:07.012 "cntlid": 111, 00:18:07.012 "qid": 0, 00:18:07.012 "state": "enabled", 00:18:07.012 "thread": "nvmf_tgt_poll_group_000", 00:18:07.012 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:07.012 "listen_address": { 00:18:07.012 "trtype": "TCP", 00:18:07.012 "adrfam": "IPv4", 00:18:07.012 "traddr": "10.0.0.2", 00:18:07.012 "trsvcid": "4420" 00:18:07.012 }, 00:18:07.012 "peer_address": { 00:18:07.012 "trtype": "TCP", 00:18:07.012 "adrfam": "IPv4", 00:18:07.012 "traddr": "10.0.0.1", 00:18:07.012 "trsvcid": "42386" 00:18:07.012 }, 00:18:07.012 "auth": { 00:18:07.012 "state": "completed", 00:18:07.012 "digest": "sha512", 00:18:07.012 "dhgroup": "ffdhe2048" 00:18:07.012 } 00:18:07.012 } 00:18:07.012 ]' 00:18:07.012 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.012 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.012 
19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.012 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:07.012 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.012 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.012 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.012 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.273 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:18:07.273 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:18:07.845 19:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.845 19:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.845 19:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.845 19:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.845 19:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.845 19:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:07.845 19:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.845 19:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:07.845 19:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:08.106 19:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:08.106 19:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.106 19:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:08.106 19:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:08.106 19:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:08.106 19:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.106 19:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.106 19:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.106 19:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.106 19:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.106 19:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.106 19:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.106 19:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.367 00:18:08.367 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.367 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.367 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.627 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.627 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.627 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.627 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.627 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.627 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.627 { 00:18:08.627 "cntlid": 113, 00:18:08.627 "qid": 0, 00:18:08.627 "state": "enabled", 00:18:08.627 "thread": "nvmf_tgt_poll_group_000", 00:18:08.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:08.627 "listen_address": { 00:18:08.627 "trtype": "TCP", 00:18:08.627 "adrfam": "IPv4", 00:18:08.627 "traddr": "10.0.0.2", 00:18:08.627 "trsvcid": "4420" 00:18:08.627 }, 00:18:08.627 "peer_address": { 00:18:08.627 "trtype": "TCP", 00:18:08.627 "adrfam": "IPv4", 00:18:08.627 "traddr": "10.0.0.1", 00:18:08.627 "trsvcid": "42410" 00:18:08.627 }, 00:18:08.627 "auth": { 00:18:08.627 "state": "completed", 00:18:08.627 "digest": "sha512", 00:18:08.627 "dhgroup": "ffdhe3072" 00:18:08.627 } 00:18:08.627 } 00:18:08.627 ]' 00:18:08.627 19:56:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.627 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.627 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.627 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:08.627 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.627 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.627 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.627 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.888 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:18:08.888 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:18:09.459 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.459 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:09.459 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.459 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.459 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.718 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.718 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:09.718 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:09.718 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:09.718 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.718 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:18:09.718 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:09.718 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:09.718 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.718 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.718 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.718 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.718 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.718 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.718 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.718 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.979 00:18:09.979 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.979 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.979 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.240 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.240 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.240 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.240 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.240 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.240 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.240 { 00:18:10.240 "cntlid": 115, 00:18:10.240 "qid": 0, 00:18:10.240 "state": "enabled", 00:18:10.240 "thread": "nvmf_tgt_poll_group_000", 00:18:10.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:10.240 "listen_address": { 00:18:10.240 "trtype": "TCP", 00:18:10.240 "adrfam": "IPv4", 00:18:10.240 "traddr": "10.0.0.2", 00:18:10.240 "trsvcid": "4420" 00:18:10.240 }, 00:18:10.240 "peer_address": { 00:18:10.240 "trtype": "TCP", 00:18:10.240 "adrfam": "IPv4", 
00:18:10.240 "traddr": "10.0.0.1", 00:18:10.240 "trsvcid": "42430" 00:18:10.240 }, 00:18:10.240 "auth": { 00:18:10.240 "state": "completed", 00:18:10.240 "digest": "sha512", 00:18:10.240 "dhgroup": "ffdhe3072" 00:18:10.240 } 00:18:10.240 } 00:18:10.240 ]' 00:18:10.240 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.240 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.240 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.241 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:10.241 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.241 19:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.241 19:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.241 19:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.502 19:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:18:10.502 19:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:18:11.074 19:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.074 19:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.074 19:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.074 19:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.335 19:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.335 19:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.335 19:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:11.335 19:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:11.335 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:18:11.335 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.335 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:11.335 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:11.335 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:11.335 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.335 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.335 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.335 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.335 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.335 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.335 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.335 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.596 00:18:11.596 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.596 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.596 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.856 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.856 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.856 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.856 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.856 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.856 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.856 { 00:18:11.856 "cntlid": 117, 00:18:11.856 "qid": 0, 00:18:11.856 "state": "enabled", 00:18:11.856 "thread": "nvmf_tgt_poll_group_000", 00:18:11.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:11.856 "listen_address": { 00:18:11.856 "trtype": "TCP", 
00:18:11.856 "adrfam": "IPv4", 00:18:11.856 "traddr": "10.0.0.2", 00:18:11.856 "trsvcid": "4420" 00:18:11.856 }, 00:18:11.856 "peer_address": { 00:18:11.856 "trtype": "TCP", 00:18:11.856 "adrfam": "IPv4", 00:18:11.856 "traddr": "10.0.0.1", 00:18:11.856 "trsvcid": "42450" 00:18:11.856 }, 00:18:11.856 "auth": { 00:18:11.856 "state": "completed", 00:18:11.856 "digest": "sha512", 00:18:11.856 "dhgroup": "ffdhe3072" 00:18:11.856 } 00:18:11.856 } 00:18:11.856 ]' 00:18:11.856 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.856 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.856 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.856 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:11.856 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.856 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.856 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.856 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.117 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:18:12.117 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:18:12.687 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.688 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:12.688 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.688 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.948 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.948 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.948 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:12.948 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:12.948 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:12.948 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.948 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:12.948 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:12.948 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:12.948 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.948 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:12.948 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.948 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.948 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.948 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:12.948 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:12.948 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:13.208 00:18:13.208 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.208 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.208 19:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.472 19:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.473 19:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.473 19:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.473 19:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.473 19:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.473 19:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.473 { 00:18:13.473 "cntlid": 119, 00:18:13.473 "qid": 0, 00:18:13.473 "state": "enabled", 00:18:13.473 "thread": "nvmf_tgt_poll_group_000", 00:18:13.473 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:13.473 "listen_address": { 00:18:13.473 "trtype": "TCP", 00:18:13.473 "adrfam": "IPv4", 00:18:13.473 "traddr": "10.0.0.2", 00:18:13.473 "trsvcid": "4420" 00:18:13.473 }, 00:18:13.473 "peer_address": { 00:18:13.473 "trtype": "TCP", 00:18:13.473 "adrfam": "IPv4", 00:18:13.473 "traddr": "10.0.0.1", 00:18:13.473 "trsvcid": "42478" 00:18:13.473 }, 00:18:13.473 "auth": { 00:18:13.473 "state": "completed", 00:18:13.473 "digest": "sha512", 00:18:13.473 "dhgroup": "ffdhe3072" 00:18:13.473 } 00:18:13.473 } 00:18:13.473 ]' 00:18:13.473 19:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.473 19:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.473 19:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.473 19:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:13.473 19:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.473 19:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.473 19:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.473 19:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.733 19:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:18:13.733 19:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:18:14.304 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.304 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:14.304 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.304 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.304 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.304 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:14.304 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.304 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:14.304 19:56:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:14.564 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:14.564 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.564 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:14.564 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:14.564 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:14.564 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.564 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.564 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.564 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.564 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.565 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.565 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.565 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.825 00:18:14.825 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.825 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.825 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.085 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.085 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.085 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.085 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.085 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.085 19:56:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.085 { 00:18:15.085 "cntlid": 121, 00:18:15.085 "qid": 0, 00:18:15.085 "state": "enabled", 00:18:15.085 "thread": "nvmf_tgt_poll_group_000", 00:18:15.085 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:15.085 "listen_address": { 00:18:15.085 "trtype": "TCP", 00:18:15.085 "adrfam": "IPv4", 00:18:15.085 "traddr": "10.0.0.2", 00:18:15.085 "trsvcid": "4420" 00:18:15.085 }, 00:18:15.085 "peer_address": { 00:18:15.085 "trtype": "TCP", 00:18:15.085 "adrfam": "IPv4", 00:18:15.085 "traddr": "10.0.0.1", 00:18:15.085 "trsvcid": "38752" 00:18:15.085 }, 00:18:15.085 "auth": { 00:18:15.085 "state": "completed", 00:18:15.085 "digest": "sha512", 00:18:15.085 "dhgroup": "ffdhe4096" 00:18:15.085 } 00:18:15.085 } 00:18:15.085 ]' 00:18:15.085 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.085 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.086 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.086 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:15.086 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.086 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.086 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.086 19:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.347 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:18:15.347 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:18:15.917 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.917 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:15.917 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.917 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.177 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
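Each pass of the loop traced above repeats one fixed sequence for a given digest, DH group, and key slot: pin the host initiator's DH-HMAC-CHAP parameters, admit the host NQN on the subsystem with that key, attach a controller so the handshake runs, check via nvmf_subsystem_get_qpairs that the qpair reports auth state "completed" with the expected digest and dhgroup, then redo the handshake from the kernel host with nvme-cli before tearing down. Below is a condensed sketch of one iteration, assembled strictly from the RPCs and flags visible in this trace; the rpc.py path, sockets, addresses, and NQNs are the log's own, the key names key1/ckey1 refer to keys registered earlier in the test (outside this excerpt), and the DHHC-1 secret strings are abbreviated.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

  # Host side (RPC socket /var/tmp/host.sock): restrict the initiator to the
  # digest and DH group under test.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

  # Target side (default RPC socket): admit the host with the key under test;
  # the controller key is included only when bidirectional auth is exercised.
  $rpc nvmf_subsystem_add_host $subnqn $hostnqn \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host side: attach a controller -- the DH-HMAC-CHAP handshake happens here.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn -b nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Verify: the controller came up, and the target-side qpair reports a
  # completed handshake with the expected digest and DH group.
  $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # nvme0
  $rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.state'          # completed

  # Detach, then repeat the handshake from the kernel host with nvme-cli,
  # passing the DHHC-1 secret strings directly (abbreviated here), and tear down.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 -q $hostnqn \
      --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
      --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."
  nvme disconnect -n $subnqn
  $rpc nvmf_subsystem_remove_host $subnqn $hostnqn

The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion visible in the trace also explains the asymmetry between iterations: a controller key is passed only when one is defined for that key slot, which is why the key3 passes attach with --dhchap-key key3 alone and therefore exercise unidirectional authentication.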
00:18:16.177 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.177 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:16.177 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:16.177 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:16.177 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.177 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:16.177 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:16.177 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:16.177 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.177 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.177 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.177 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.177 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.177 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.177 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.177 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.438 00:18:16.438 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.438 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.438 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.698 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.698 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.698 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.698 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.698 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.698 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.698 { 00:18:16.698 "cntlid": 123, 00:18:16.698 "qid": 0, 00:18:16.698 "state": "enabled", 00:18:16.698 "thread": "nvmf_tgt_poll_group_000", 00:18:16.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:16.698 "listen_address": { 00:18:16.698 "trtype": "TCP", 00:18:16.698 "adrfam": "IPv4", 00:18:16.698 "traddr": "10.0.0.2", 00:18:16.698 "trsvcid": "4420" 00:18:16.698 }, 00:18:16.698 "peer_address": { 00:18:16.698 "trtype": "TCP", 00:18:16.698 "adrfam": "IPv4", 00:18:16.698 "traddr": "10.0.0.1", 00:18:16.698 "trsvcid": "38776" 00:18:16.698 }, 00:18:16.698 "auth": { 00:18:16.698 "state": "completed", 00:18:16.698 "digest": "sha512", 00:18:16.698 "dhgroup": "ffdhe4096" 00:18:16.698 } 00:18:16.698 } 00:18:16.698 ]' 00:18:16.698 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.698 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.698 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.698 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:16.698 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.959 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.959 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.959 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.959 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:18:16.959 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:18:17.900 19:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.900 19:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:17.900 19:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.900 19:56:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.900 19:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.900 19:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.900 19:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:17.900 19:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:17.900 19:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:17.900 19:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.900 19:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:17.900 19:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:17.900 19:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:17.900 19:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.900 19:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.900 19:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.900 19:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.900 19:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.900 19:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.900 19:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.900 19:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.159 00:18:18.159 19:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.159 19:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.159 19:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.420 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.420 19:56:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.420 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.420 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.420 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.420 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.420 { 00:18:18.420 "cntlid": 125, 00:18:18.420 "qid": 0, 00:18:18.420 "state": "enabled", 00:18:18.420 "thread": "nvmf_tgt_poll_group_000", 00:18:18.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:18.420 "listen_address": { 00:18:18.420 "trtype": "TCP", 00:18:18.420 "adrfam": "IPv4", 00:18:18.420 "traddr": "10.0.0.2", 00:18:18.420 "trsvcid": "4420" 00:18:18.420 }, 00:18:18.420 "peer_address": { 00:18:18.420 "trtype": "TCP", 00:18:18.420 "adrfam": "IPv4", 00:18:18.420 "traddr": "10.0.0.1", 00:18:18.420 "trsvcid": "38810" 00:18:18.420 }, 00:18:18.420 "auth": { 00:18:18.420 "state": "completed", 00:18:18.420 "digest": "sha512", 00:18:18.420 "dhgroup": "ffdhe4096" 00:18:18.420 } 00:18:18.420 } 00:18:18.420 ]' 00:18:18.420 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.420 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.420 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.420 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:18.420 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.420 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.420 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.420 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.679 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:18:18.679 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:18:19.249 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.249 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:19.249 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.249 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.249 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.249 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.249 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:19.249 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:19.510 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:19.510 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.510 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:19.510 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:19.510 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:19.510 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.510 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:19.510 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.510 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.510 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.510 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:19.510 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:19.510 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:19.770 00:18:19.770 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.770 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.770 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.031 19:56:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.031 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.031 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.031 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.031 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.031 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.031 { 00:18:20.031 "cntlid": 127, 00:18:20.031 "qid": 0, 00:18:20.031 "state": "enabled", 00:18:20.031 "thread": "nvmf_tgt_poll_group_000", 00:18:20.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:20.031 "listen_address": { 00:18:20.031 "trtype": "TCP", 00:18:20.031 "adrfam": "IPv4", 00:18:20.031 "traddr": "10.0.0.2", 00:18:20.031 "trsvcid": "4420" 00:18:20.031 }, 00:18:20.031 "peer_address": { 00:18:20.031 "trtype": "TCP", 00:18:20.031 "adrfam": "IPv4", 00:18:20.031 "traddr": "10.0.0.1", 00:18:20.031 "trsvcid": "38834" 00:18:20.031 }, 00:18:20.031 "auth": { 00:18:20.031 "state": "completed", 00:18:20.031 "digest": "sha512", 00:18:20.031 "dhgroup": "ffdhe4096" 00:18:20.031 } 00:18:20.031 } 00:18:20.031 ]' 00:18:20.031 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.031 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.031 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.031 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:20.031 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.031 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.031 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.031 19:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.291 19:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:18:20.291 19:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:18:20.879 19:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.879 19:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:20.879 19:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.879 19:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.879 19:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.879 19:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:20.879 19:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.879 19:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:20.879 19:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:21.138 19:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:21.138 19:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.138 19:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:21.138 19:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:21.138 19:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:21.138 19:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.138 19:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.138 19:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.138 19:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.138 19:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.138 19:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.139 19:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.139 19:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.398 00:18:21.398 19:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.398 19:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.398 
19:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.658 19:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.658 19:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.658 19:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.658 19:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.658 19:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.658 19:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.658 { 00:18:21.658 "cntlid": 129, 00:18:21.658 "qid": 0, 00:18:21.658 "state": "enabled", 00:18:21.658 "thread": "nvmf_tgt_poll_group_000", 00:18:21.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:21.658 "listen_address": { 00:18:21.658 "trtype": "TCP", 00:18:21.658 "adrfam": "IPv4", 00:18:21.658 "traddr": "10.0.0.2", 00:18:21.658 "trsvcid": "4420" 00:18:21.658 }, 00:18:21.658 "peer_address": { 00:18:21.658 "trtype": "TCP", 00:18:21.658 "adrfam": "IPv4", 00:18:21.658 "traddr": "10.0.0.1", 00:18:21.658 "trsvcid": "38860" 00:18:21.658 }, 00:18:21.658 "auth": { 00:18:21.658 "state": "completed", 00:18:21.658 "digest": "sha512", 00:18:21.659 "dhgroup": "ffdhe6144" 00:18:21.659 } 00:18:21.659 } 00:18:21.659 ]' 00:18:21.659 19:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.659 19:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.659 19:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.919 19:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:21.919 19:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.919 19:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.919 19:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.919 19:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.919 19:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:18:21.919 19:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret 
DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:18:22.858 19:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.858 19:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.858 19:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.858 19:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.858 19:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.858 19:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.858 19:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:22.858 19:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:22.858 19:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:22.858 19:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.858 19:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:22.858 19:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:22.858 19:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:22.858 19:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.858 19:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.858 19:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.858 19:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.858 19:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.858 19:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.858 19:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.858 19:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.118 00:18:23.118 19:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.118 19:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.118 19:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.378 19:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.378 19:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.378 19:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.378 19:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.378 19:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.378 19:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.378 { 00:18:23.378 "cntlid": 131, 00:18:23.378 "qid": 0, 00:18:23.378 "state": "enabled", 00:18:23.378 "thread": "nvmf_tgt_poll_group_000", 00:18:23.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:23.378 "listen_address": { 00:18:23.378 "trtype": "TCP", 00:18:23.378 "adrfam": "IPv4", 00:18:23.378 "traddr": "10.0.0.2", 00:18:23.378 "trsvcid": "4420" 00:18:23.378 }, 00:18:23.378 "peer_address": { 00:18:23.378 "trtype": "TCP", 00:18:23.378 "adrfam": "IPv4", 00:18:23.378 "traddr": "10.0.0.1", 00:18:23.378 "trsvcid": "38892" 00:18:23.378 }, 00:18:23.378 "auth": { 00:18:23.378 "state": "completed", 00:18:23.378 "digest": "sha512", 00:18:23.378 "dhgroup": "ffdhe6144" 00:18:23.378 } 00:18:23.378 } 00:18:23.378 ]' 00:18:23.378 19:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.378 19:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.378 19:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.378 19:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:23.378 19:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.638 19:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.638 19:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.638 19:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.638 19:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:18:23.638 19:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:18:24.583 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.583 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.583 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.583 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.583 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.583 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.583 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:24.583 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:24.583 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:24.583 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.583 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:24.583 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:24.583 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:24.583 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.583 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.583 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.583 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.583 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.583 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.583 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.583 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.844 00:18:24.844 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.844 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.844 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.104 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.104 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.104 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.104 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.104 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.104 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.104 { 00:18:25.104 "cntlid": 133, 00:18:25.104 "qid": 0, 00:18:25.104 "state": "enabled", 00:18:25.104 "thread": "nvmf_tgt_poll_group_000", 00:18:25.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:25.104 "listen_address": { 00:18:25.104 "trtype": "TCP", 00:18:25.104 "adrfam": "IPv4", 00:18:25.104 "traddr": "10.0.0.2", 00:18:25.104 "trsvcid": "4420" 00:18:25.104 }, 00:18:25.104 "peer_address": { 00:18:25.104 "trtype": "TCP", 00:18:25.104 "adrfam": "IPv4", 00:18:25.104 "traddr": "10.0.0.1", 00:18:25.104 "trsvcid": "55048" 00:18:25.104 }, 00:18:25.104 "auth": { 00:18:25.104 "state": "completed", 00:18:25.104 "digest": "sha512", 00:18:25.104 "dhgroup": "ffdhe6144" 00:18:25.104 } 00:18:25.104 } 00:18:25.104 ]' 00:18:25.104 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.104 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.104 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.104 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:25.104 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.366 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.366 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.366 19:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.366 19:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret 
DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:18:25.366 19:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:18:26.311 19:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.311 19:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:26.311 19:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.311 19:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.311 19:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.311 19:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.311 19:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:26.311 19:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:26.311 19:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:26.311 19:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.311 19:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:26.311 19:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:26.311 19:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:26.311 19:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.311 19:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:26.311 19:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.311 19:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.311 19:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.311 19:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:26.311 19:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:26.311 19:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:26.573 00:18:26.573 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.573 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.573 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.834 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.834 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.834 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.834 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.834 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.834 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.834 { 00:18:26.834 "cntlid": 135, 00:18:26.834 "qid": 0, 00:18:26.834 "state": "enabled", 00:18:26.834 "thread": "nvmf_tgt_poll_group_000", 00:18:26.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:26.834 "listen_address": { 00:18:26.834 "trtype": "TCP", 00:18:26.834 "adrfam": "IPv4", 00:18:26.834 "traddr": "10.0.0.2", 00:18:26.834 "trsvcid": "4420" 00:18:26.834 }, 00:18:26.834 "peer_address": { 00:18:26.834 "trtype": "TCP", 00:18:26.834 "adrfam": "IPv4", 00:18:26.834 "traddr": "10.0.0.1", 00:18:26.834 "trsvcid": "55084" 00:18:26.834 }, 00:18:26.834 "auth": { 00:18:26.834 "state": "completed", 00:18:26.834 "digest": "sha512", 00:18:26.834 "dhgroup": "ffdhe6144" 00:18:26.834 } 00:18:26.834 } 00:18:26.834 ]' 00:18:26.834 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.834 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.834 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.834 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:26.834 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.834 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.834 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.834 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.095 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:18:27.095 19:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:18:27.666 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.666 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.666 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.666 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.666 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.666 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:27.666 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.666 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:27.666 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:27.926 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:27.926 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.926 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:27.926 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:27.926 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:27.926 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.926 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.926 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.926 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.926 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.926 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.926 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.926 19:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.499 00:18:28.499 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.499 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.499 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.499 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.499 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.499 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.499 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.499 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.499 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.499 { 00:18:28.499 "cntlid": 137, 00:18:28.499 "qid": 0, 00:18:28.499 "state": "enabled", 00:18:28.499 "thread": "nvmf_tgt_poll_group_000", 00:18:28.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:28.499 "listen_address": { 00:18:28.499 "trtype": "TCP", 00:18:28.499 "adrfam": "IPv4", 00:18:28.499 "traddr": "10.0.0.2", 00:18:28.499 "trsvcid": "4420" 00:18:28.499 }, 00:18:28.499 "peer_address": { 00:18:28.499 "trtype": "TCP", 00:18:28.499 "adrfam": "IPv4", 00:18:28.499 "traddr": "10.0.0.1", 00:18:28.499 "trsvcid": "55102" 00:18:28.499 }, 00:18:28.499 "auth": { 00:18:28.499 "state": "completed", 00:18:28.499 "digest": "sha512", 00:18:28.499 "dhgroup": "ffdhe8192" 00:18:28.499 } 00:18:28.499 } 00:18:28.499 ]' 00:18:28.499 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.773 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.773 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.773 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:28.773 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.773 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.773 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.773 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.075 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:18:29.075 19:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:18:29.677 19:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.677 19:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:29.677 19:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.677 19:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.677 19:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.677 19:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.677 19:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:29.677 19:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:29.677 19:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:29.677 19:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.677 19:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:29.677 19:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:29.677 19:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:29.677 19:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.677 19:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.677 19:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.677 19:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.677 19:56:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.677 19:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.677 19:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.677 19:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.248 00:18:30.248 19:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.248 19:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.248 19:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.509 19:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.509 19:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.509 19:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.509 19:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.509 19:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.509 19:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.509 { 00:18:30.509 "cntlid": 139, 00:18:30.509 "qid": 0, 00:18:30.509 "state": "enabled", 00:18:30.509 "thread": "nvmf_tgt_poll_group_000", 00:18:30.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:30.509 "listen_address": { 00:18:30.509 "trtype": "TCP", 00:18:30.509 "adrfam": "IPv4", 00:18:30.509 "traddr": "10.0.0.2", 00:18:30.509 "trsvcid": "4420" 00:18:30.509 }, 00:18:30.509 "peer_address": { 00:18:30.509 "trtype": "TCP", 00:18:30.509 "adrfam": "IPv4", 00:18:30.509 "traddr": "10.0.0.1", 00:18:30.509 "trsvcid": "55108" 00:18:30.509 }, 00:18:30.509 "auth": { 00:18:30.509 "state": "completed", 00:18:30.509 "digest": "sha512", 00:18:30.509 "dhgroup": "ffdhe8192" 00:18:30.509 } 00:18:30.509 } 00:18:30.509 ]' 00:18:30.509 19:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.509 19:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.509 19:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.509 19:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:30.509 19:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.509 19:56:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.509 19:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.509 19:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.770 19:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:18:30.770 19:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: --dhchap-ctrl-secret DHHC-1:02:ZDlmM2M1ZTk2ZWY0MjI2MGYzYzRhZTMzNjQzNWNkYzNiNjQ4YjgyMTE3ZWE1NDk0iqVwxQ==: 00:18:31.342 19:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.342 19:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.342 19:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.342 19:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.602 19:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.602 19:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.602 19:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:31.602 19:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:31.602 19:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:31.602 19:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.602 19:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:31.602 19:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:31.602 19:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:31.602 19:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.602 19:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.602 19:56:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.602 19:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.602 19:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.602 19:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.602 19:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.602 19:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.175 00:18:32.175 19:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:32.175 19:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:32.175 19:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.437 19:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.437 19:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.437 19:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.437 19:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.437 19:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.437 19:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.437 { 00:18:32.437 "cntlid": 141, 00:18:32.437 "qid": 0, 00:18:32.437 "state": "enabled", 00:18:32.437 "thread": "nvmf_tgt_poll_group_000", 00:18:32.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:32.437 "listen_address": { 00:18:32.437 "trtype": "TCP", 00:18:32.437 "adrfam": "IPv4", 00:18:32.437 "traddr": "10.0.0.2", 00:18:32.437 "trsvcid": "4420" 00:18:32.437 }, 00:18:32.437 "peer_address": { 00:18:32.437 "trtype": "TCP", 00:18:32.437 "adrfam": "IPv4", 00:18:32.437 "traddr": "10.0.0.1", 00:18:32.437 "trsvcid": "55130" 00:18:32.437 }, 00:18:32.437 "auth": { 00:18:32.437 "state": "completed", 00:18:32.437 "digest": "sha512", 00:18:32.437 "dhgroup": "ffdhe8192" 00:18:32.437 } 00:18:32.437 } 00:18:32.437 ]' 00:18:32.438 19:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.438 19:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:32.438 19:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.438 19:56:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:32.438 19:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.438 19:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.438 19:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.438 19:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.699 19:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:18:32.699 19:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:01:NjkzYzZjZTZmNWRjMTI2NzA4MzVmZmYwOWIyMTA4MTdoCUq+: 00:18:33.271 19:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.271 19:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.271 19:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.271 19:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.271 19:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.271 19:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.271 19:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:33.271 19:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:33.532 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:33.532 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.532 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:33.532 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:33.532 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:33.532 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.532 19:56:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:33.532 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.532 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.532 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.532 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:33.532 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:33.532 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:33.793 00:18:34.054 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.054 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.054 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.054 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.054 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.054 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.054 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.054 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.054 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.054 { 00:18:34.054 "cntlid": 143, 00:18:34.054 "qid": 0, 00:18:34.054 "state": "enabled", 00:18:34.054 "thread": "nvmf_tgt_poll_group_000", 00:18:34.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:34.054 "listen_address": { 00:18:34.054 "trtype": "TCP", 00:18:34.054 "adrfam": "IPv4", 00:18:34.054 "traddr": "10.0.0.2", 00:18:34.054 "trsvcid": "4420" 00:18:34.054 }, 00:18:34.054 "peer_address": { 00:18:34.054 "trtype": "TCP", 00:18:34.054 "adrfam": "IPv4", 00:18:34.054 "traddr": "10.0.0.1", 00:18:34.054 "trsvcid": "55158" 00:18:34.054 }, 00:18:34.054 "auth": { 00:18:34.054 "state": "completed", 00:18:34.054 "digest": "sha512", 00:18:34.054 "dhgroup": "ffdhe8192" 00:18:34.054 } 00:18:34.054 } 00:18:34.054 ]' 00:18:34.054 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.054 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:34.054 
19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.314 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:34.314 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:34.314 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.314 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.314 19:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.314 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:18:34.314 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=: 00:18:35.255 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.255 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.255 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.255 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.255 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.255 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:35.255 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:35.255 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:35.255 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:35.255 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:35.255 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:35.255 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:35.255 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:35.255 19:56:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:35.255 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:35.255 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:35.255 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.255 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.255 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.255 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.255 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.255 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.255 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.255 19:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.825 00:18:35.825 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.825 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.825 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.825 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.825 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.825 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.825 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.825 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.825 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.825 { 00:18:35.825 "cntlid": 145, 00:18:35.825 "qid": 0, 00:18:35.825 "state": "enabled", 00:18:35.825 "thread": "nvmf_tgt_poll_group_000", 00:18:35.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:35.825 "listen_address": { 00:18:35.825 "trtype": "TCP", 00:18:35.825 "adrfam": "IPv4", 00:18:35.825 "traddr": "10.0.0.2", 00:18:35.825 "trsvcid": "4420" 00:18:35.825 }, 00:18:35.825 "peer_address": { 00:18:35.825 
"trtype": "TCP", 00:18:35.825 "adrfam": "IPv4", 00:18:35.825 "traddr": "10.0.0.1", 00:18:35.825 "trsvcid": "36236" 00:18:35.825 }, 00:18:35.825 "auth": { 00:18:35.825 "state": "completed", 00:18:35.825 "digest": "sha512", 00:18:35.825 "dhgroup": "ffdhe8192" 00:18:35.825 } 00:18:35.825 } 00:18:35.825 ]' 00:18:35.825 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.086 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:36.086 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.086 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:36.086 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.086 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.086 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.086 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.346 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:18:36.346 19:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MTg5YTAzZWVkODI0NzMxZGNkOTAzYWFmZjA0NjFjZTZjOGI1N2VlODExZGU0ODY44kkkww==: --dhchap-ctrl-secret DHHC-1:03:NTVlMmViNWM1NThlYThjZjAzODM1YWZiNGNlNjRkZDRmN2QzM2U1NjM2YzczYjI5NmJkMDdlYWRmMTI4MWU5YSYpt5Q=: 00:18:36.916 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.916 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.916 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.916 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.916 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.916 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:36.916 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.916 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.916 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.916 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:36.916 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:36.916 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:36.916 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:36.916 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:36.916 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:36.916 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:36.916 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:36.916 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:36.916 19:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:37.487 request: 00:18:37.487 { 00:18:37.487 "name": "nvme0", 00:18:37.487 "trtype": "tcp", 00:18:37.487 "traddr": "10.0.0.2", 00:18:37.487 "adrfam": "ipv4", 00:18:37.487 "trsvcid": "4420", 00:18:37.487 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:37.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:37.487 "prchk_reftag": false, 00:18:37.487 "prchk_guard": false, 00:18:37.487 "hdgst": false, 00:18:37.487 "ddgst": false, 00:18:37.487 "dhchap_key": "key2", 00:18:37.487 "allow_unrecognized_csi": false, 00:18:37.487 "method": "bdev_nvme_attach_controller", 00:18:37.487 "req_id": 1 00:18:37.487 } 00:18:37.487 Got JSON-RPC error response 00:18:37.487 response: 00:18:37.487 { 00:18:37.487 "code": -5, 00:18:37.487 "message": "Input/output error" 00:18:37.487 } 00:18:37.487 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:37.487 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:37.487 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:37.487 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:37.487 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.487 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.487 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.487 19:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.487 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.487 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.487 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.487 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.487 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:37.487 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:37.487 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:37.487 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:37.487 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.487 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:37.487 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.487 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:37.487 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:37.487 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:37.748 request: 00:18:37.748 { 00:18:37.748 "name": "nvme0", 00:18:37.748 "trtype": "tcp", 00:18:37.748 "traddr": "10.0.0.2", 00:18:37.748 "adrfam": "ipv4", 00:18:37.748 "trsvcid": "4420", 00:18:37.748 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:37.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:37.748 "prchk_reftag": false, 00:18:37.748 "prchk_guard": false, 00:18:37.748 "hdgst": false, 00:18:37.748 "ddgst": false, 00:18:37.748 "dhchap_key": "key1", 00:18:37.748 "dhchap_ctrlr_key": "ckey2", 00:18:37.748 "allow_unrecognized_csi": false, 00:18:37.748 "method": "bdev_nvme_attach_controller", 00:18:37.748 "req_id": 1 00:18:37.748 } 00:18:37.748 Got JSON-RPC error response 00:18:37.748 response: 00:18:37.748 { 00:18:37.748 "code": -5, 00:18:37.748 "message": "Input/output error" 00:18:37.748 } 00:18:37.748 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:37.748 19:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:37.748 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:37.748 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:37.748 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.748 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.748 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.748 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.748 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:37.748 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.748 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.748 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.748 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.748 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:37.749 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.749 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:37.749 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.749 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:37.749 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.749 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.749 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.749 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.319 request: 00:18:38.319 { 00:18:38.319 "name": "nvme0", 00:18:38.319 "trtype": "tcp", 00:18:38.319 "traddr": "10.0.0.2", 00:18:38.319 "adrfam": "ipv4", 00:18:38.319 "trsvcid": "4420", 00:18:38.319 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:38.319 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:38.319 "prchk_reftag": false, 00:18:38.319 "prchk_guard": false, 00:18:38.319 "hdgst": false, 00:18:38.319 "ddgst": false, 00:18:38.319 "dhchap_key": "key1", 00:18:38.319 "dhchap_ctrlr_key": "ckey1", 00:18:38.319 "allow_unrecognized_csi": false, 00:18:38.319 "method": "bdev_nvme_attach_controller", 00:18:38.319 "req_id": 1 00:18:38.319 } 00:18:38.319 Got JSON-RPC error response 00:18:38.319 response: 00:18:38.319 { 00:18:38.319 "code": -5, 00:18:38.319 "message": "Input/output error" 00:18:38.319 } 00:18:38.319 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:38.319 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:38.319 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:38.319 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:38.319 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:38.319 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.319 19:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.319 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.319 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3629021 00:18:38.319 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3629021 ']' 00:18:38.319 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3629021 00:18:38.319 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:38.319 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.319 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3629021 00:18:38.319 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:38.319 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:38.319 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3629021' 00:18:38.319 killing process with pid 3629021 00:18:38.319 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3629021 00:18:38.319 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3629021 00:18:38.583 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:38.583 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:38.583 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:38.583 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:38.583 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3654875 00:18:38.583 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3654875 00:18:38.583 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:38.583 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3654875 ']' 00:18:38.583 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.583 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.583 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.583 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.583 19:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.526 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.526 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:39.526 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:39.526 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:39.526 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.526 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.526 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:39.526 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3654875 00:18:39.526 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3654875 ']' 00:18:39.526 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.526 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:39.526 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
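The three rejected attach attempts above exercise the DH-HMAC-CHAP failure paths: the target only registered key1 for this host, so offering key2, or pairing key1 with a mismatched or unconfigured controller key (ckey2 against the target's ckey1, then ckey1 when the target registered none), makes bdev_nvme_attach_controller return JSON-RPC error -5 (Input/output error). A minimal standalone sketch of that negative check, distilled from the commands in this run (same NQNs, addresses, and RPC sockets; the netns wrapper and the script's hostrpc/rpc_cmd helpers are elided):

    #!/usr/bin/env bash
    # Negative DH-HMAC-CHAP check, condensed from the test run above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # Target side (default RPC socket): allow the host with key1 only.
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1

    # Host side (host.sock): offering key2 instead must fail authentication;
    # expect JSON-RPC error -5 (Input/output error), as in the responses above.
    if $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key2; then
      echo "unexpected: attach succeeded with the wrong DH-HMAC-CHAP key" >&2
      exit 1
    fi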
00:18:39.527 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:39.527 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:39.527 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:39.527 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:18:39.527 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd
00:18:39.527 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:39.527 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:39.527 null0
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.57a
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.09u ]]
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.09u
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Vvj
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.iS1 ]]
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iS1
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.SUx
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.FlH ]]
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.FlH
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.laA
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]]
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:18:39.788 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:39.789 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:39.789 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:18:39.789 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:39.789 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:39.789 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
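Everything above is key provisioning: keyring_file_add_key registers each /tmp/spdk.key-* file under a short keyring name (key0..key3 plus the ckeyN controller keys), and nvmf_subsystem_add_host then grants the host NQN access tied to one of them. Condensed to its essentials, as in the attach that follows (rpc.py abbreviates the full scripts/rpc.py path invoked by rpc_cmd, and <host-nqn> stands for the long uuid NQN in the trace):

rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.laA                                # register the key file as "key3"
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host-nqn> --dhchap-key key3   # require DH-HMAC-CHAP with it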
00:18:39.789 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:39.789 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:39.789 19:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:40.362 nvme0n1
00:18:40.623 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:40.623 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:40.623 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:40.623 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:40.623 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:40.623 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:40.623 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:40.623 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:40.623 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:40.623 {
00:18:40.623 "cntlid": 1,
00:18:40.623 "qid": 0,
00:18:40.623 "state": "enabled",
00:18:40.623 "thread": "nvmf_tgt_poll_group_000",
00:18:40.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:40.623 "listen_address": {
00:18:40.623 "trtype": "TCP",
00:18:40.623 "adrfam": "IPv4",
00:18:40.623 "traddr": "10.0.0.2",
00:18:40.623 "trsvcid": "4420"
00:18:40.623 },
00:18:40.623 "peer_address": {
00:18:40.623 "trtype": "TCP",
00:18:40.623 "adrfam": "IPv4",
00:18:40.623 "traddr": "10.0.0.1",
00:18:40.623 "trsvcid": "36292"
00:18:40.623 },
00:18:40.623 "auth": {
00:18:40.623 "state": "completed",
00:18:40.623 "digest": "sha512",
00:18:40.623 "dhgroup": "ffdhe8192"
00:18:40.623 }
00:18:40.623 }
00:18:40.623 ]'
00:18:40.623 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:40.884 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:40.884 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:40.884 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:18:40.884 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:40.884 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:40.884 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:40.884 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
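The qpairs dump is the positive check: auth.state reports "completed", and digest/dhgroup record what was actually negotiated (sha512 over ffdhe8192, matching connect_authenticate's arguments). The same probes the jq lines perform can be run by hand against the target's RPC socket; a sketch, using the rpc.py shorthand from before:

rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # expect: completed
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest'   # expect: sha512
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'  # expect: ffdhe8192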
00:18:41.146 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=:
00:18:41.146 19:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=:
00:18:41.719 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:41.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
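nvme_connect exercises the kernel initiator rather than SPDK's host stack: the DHHC-1 blob is handed to nvme-cli verbatim and the kernel performs the same DH-HMAC-CHAP exchange. Reduced to the nvme-cli calls, with the secret and host identity elided here (both appear in full in the trace above):

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 \
    -i 1 -q <host-nqn> --hostid <host-id> -l 0 \
    --dhchap-secret 'DHHC-1:03:...'            # -l 0: no controller-loss retry window
nvme disconnect -n nqn.2024-03.io.spdk:cnode0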
00:18:41.719 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:41.719 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:41.719 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:41.720 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:41.720 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:18:41.720 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:41.720 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:41.720 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:41.720 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256
00:18:41.720 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
00:18:41.980 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:18:41.980 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:18:41.980 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:18:41.980 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:18:41.980 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:41.980 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:18:41.980 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:41.980 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:41.980 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:41.980 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:41.980 request:
00:18:41.980 {
00:18:41.980 "name": "nvme0",
00:18:41.980 "trtype": "tcp",
00:18:41.980 "traddr": "10.0.0.2",
00:18:41.980 "adrfam": "ipv4",
00:18:41.980 "trsvcid": "4420",
00:18:41.980 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:18:41.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:41.981 "prchk_reftag": false,
00:18:41.981 "prchk_guard": false,
00:18:41.981 "hdgst": false,
00:18:41.981 "ddgst": false,
00:18:41.981 "dhchap_key": "key3",
00:18:41.981 "allow_unrecognized_csi": false,
00:18:41.981 "method": "bdev_nvme_attach_controller",
00:18:41.981 "req_id": 1
00:18:41.981 }
00:18:41.981 Got JSON-RPC error response
00:18:41.981 response:
00:18:41.981 {
00:18:41.981 "code": -5,
00:18:41.981 "message": "Input/output error"
00:18:41.981 }
00:18:41.981 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:18:41.981 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:41.981 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:41.981 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:41.981 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=,
00:18:41.981 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512
00:18:41.981 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:18:41.981 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:18:42.241 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:18:42.241 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:18:42.241 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:18:42.241 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:18:42.241 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:42.241 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:18:42.241 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:42.241 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:42.241 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:42.241 19:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:42.502 request:
00:18:42.502 {
00:18:42.502 "name": "nvme0",
00:18:42.502 "trtype": "tcp",
00:18:42.502 "traddr": "10.0.0.2",
00:18:42.502 "adrfam": "ipv4",
00:18:42.502 "trsvcid": "4420",
00:18:42.502 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:18:42.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:42.502 "prchk_reftag": false,
00:18:42.502 "prchk_guard": false,
00:18:42.502 "hdgst": false,
00:18:42.502 "ddgst": false,
00:18:42.502 "dhchap_key": "key3",
00:18:42.502 "allow_unrecognized_csi": false,
00:18:42.502 "method": "bdev_nvme_attach_controller",
00:18:42.502 "req_id": 1
00:18:42.502 }
00:18:42.502 Got JSON-RPC error response
00:18:42.502 response:
00:18:42.502 {
00:18:42.502 "code": -5,
00:18:42.502 "message": "Input/output error"
00:18:42.502 }
00:18:42.502 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:18:42.502 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:42.502 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:42.502 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:42.502 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:18:42.502 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512
00:18:42.502 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:18:42.502 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:18:42.502 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:18:42.502 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:18:42.763 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:42.763 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:42.763 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:42.763 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
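The two failed attaches bracketed by bdev_nvme_set_options are the negotiation-mismatch cases: once the host is restricted to offers that the target's sha512 key can never satisfy, authentication aborts and the attach surfaces as -5 (Input/output error); re-widening the offer set repairs it. The two host-side settings, side by side:

rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256       # too narrow for a sha512 key: attach fails with -5
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192     # full set: attach succeeds again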
00:18:42.763 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:42.763 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:42.763 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:42.763 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:42.763 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:42.763 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:18:42.763 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:42.763 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:18:42.763 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:42.763 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:18:42.763 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:42.763 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:42.763 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:42.763 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:43.023 request:
00:18:43.023 {
00:18:43.023 "name": "nvme0",
00:18:43.023 "trtype": "tcp",
00:18:43.023 "traddr": "10.0.0.2",
00:18:43.023 "adrfam": "ipv4",
00:18:43.023 "trsvcid": "4420",
00:18:43.023 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:18:43.023 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:43.023 "prchk_reftag": false,
00:18:43.023 "prchk_guard": false,
00:18:43.023 "hdgst": false,
00:18:43.023 "ddgst": false,
00:18:43.023 "dhchap_key": "key0",
00:18:43.023 "dhchap_ctrlr_key": "key1",
00:18:43.023 "allow_unrecognized_csi": false,
00:18:43.023 "method": "bdev_nvme_attach_controller",
00:18:43.023 "req_id": 1
00:18:43.023 }
00:18:43.023 Got JSON-RPC error response
00:18:43.023 response:
00:18:43.023 {
00:18:43.023 "code": -5,
00:18:43.023 "message": "Input/output error"
00:18:43.023 }
00:18:43.023 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:18:43.023 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:43.023 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:43.023 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
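The es= bookkeeping around each failed attach comes from the autotest NOT helper: it runs its arguments and succeeds only if they fail, which is how all the expected-failure cases in this trace are asserted. A simplified sketch of the idea (the real helper in common/autotest_common.sh also validates the argument and special-cases large exit codes, as the (( es > 128 )) line hints):

NOT() {
    ! "$@"   # invert the exit status: pass only when the wrapped command fails
}
NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1   # passes here: the host grant has no keys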
00:18:43.023 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0
00:18:43.023 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:18:43.023 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:18:43.284 nvme0n1
00:18:43.284 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers
00:18:43.284 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name'
00:18:43.284 19:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:43.544 19:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:43.544 19:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:43.544 19:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:43.544 19:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1
00:18:43.544 19:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:43.544 19:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:43.545 19:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:43.545 19:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1
00:18:43.545 19:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:18:43.545 19:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:18:44.487 nvme0n1
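This is the basic target-side rotation pattern: detach, point the subsystem's host grant at a new key with nvmf_subsystem_set_keys, and reconnect with the replacement; the old key0 would now be refused. In outline, with the rpc.py and <host-nqn> shorthands from earlier:

rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 <host-nqn> --dhchap-key key1    # rotate on the target
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q <host-nqn> -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1                # reconnect with the new key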
00:18:44.487 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers
00:18:44.487 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name'
00:18:44.487 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:44.487 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:44.487 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:44.487 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:44.487 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:44.487 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:44.487 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers
00:18:44.487 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name'
00:18:44.748 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:44.748 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:44.748 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=:
00:18:44.748 19:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: --dhchap-ctrl-secret DHHC-1:03:NGViZmI5ZjI4NzE2MWRkYzkxMTExYTc5NzI2ODBiMzA0ZDQ4MDZkZTMxYWJmMGE4ZmU1MTJiNjAxMzBlMmYyMf8FcLc=:
00:18:45.317 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr
00:18:45.317 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev
00:18:45.317 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*
00:18:45.317 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:18:45.317 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0
00:18:45.317 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break
00:18:45.317 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0
00:18:45.317 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:45.317 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:45.578 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1
00:18:45.578 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:18:45.578 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:18:45.578 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:18:45.578 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:45.578 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:18:45.578 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:45.578 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1
00:18:45.579 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:18:45.579 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:18:45.839 request:
00:18:45.839 {
00:18:45.839 "name": "nvme0",
00:18:45.839 "trtype": "tcp",
00:18:45.839 "traddr": "10.0.0.2",
00:18:45.839 "adrfam": "ipv4",
00:18:45.839 "trsvcid": "4420",
00:18:45.839 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:18:45.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:45.839 "prchk_reftag": false,
00:18:45.839 "prchk_guard": false,
00:18:45.839 "hdgst": false,
00:18:45.839 "ddgst": false,
00:18:45.839 "dhchap_key": "key1",
00:18:45.839 "allow_unrecognized_csi": false,
00:18:45.839 "method": "bdev_nvme_attach_controller",
00:18:45.839 "req_id": 1
00:18:45.839 }
00:18:45.839 Got JSON-RPC error response
00:18:45.839 response:
00:18:45.839 {
00:18:45.839 "code": -5,
00:18:45.839 "message": "Input/output error"
00:18:45.839 }
00:18:46.100 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:18:46.100 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:46.100 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:46.100 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
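The stale key1 attach fails as expected; the reconnect just below adds --dhchap-ctrlr-key, which makes the authentication bidirectional: key2 proves the host to the controller and key3 lets the host verify the controller in return, mirroring the key2/key3 pair installed on the target earlier. The host-side call, in outline:

rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q <host-nqn> -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key3   # both directions must match what the target granted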
00:18:46.100 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:46.100 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:46.100 19:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:46.671 nvme0n1
00:18:46.671 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers
00:18:46.671 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name'
00:18:46.671 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:46.931 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:46.931 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:46.931 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:47.192 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:47.192 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:47.192 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:47.192 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:47.192 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0
00:18:47.192 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:18:47.192 19:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:18:47.192 nvme0n1
00:18:47.453 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers
00:18:47.453 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name'
00:18:47.453 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:47.453 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:47.453 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:47.453 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:47.714 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:47.714 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:47.714 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:47.714 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:47.714 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: '' 2s
00:18:47.714 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:18:47.714 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:18:47.714 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP:
00:18:47.714 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=
00:18:47.714 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:18:47.715 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:18:47.715 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP: ]]
00:18:47.715 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZGUwYWNkNTRmNDY1YjU1MTNlMDc0NWRhNjhiMjMyZjRXTMMP:
00:18:47.715 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]]
00:18:47.715 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:18:47.715 19:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:18:49.626 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1
00:18:49.626 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0
00:18:49.626 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:18:49.626 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:18:49.627 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:18:49.627 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:18:49.627 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0
00:18:49.627 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2
00:18:49.627 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:49.627 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:49.627 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
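For the kernel initiator there is no RPC: nvme_set_keys rewrites the controller's secret through sysfs, sleeps out the 2s timeout, and waitforblk then confirms the namespace survived the rekey. A sketch of what the helper does, assuming the kernel's dhchap_secret and dhchap_ctrl_secret controller attributes (the trace shows only the controller directory, not the attribute names, so those are an assumption):

ctl=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
echo 'DHHC-1:01:...' > "$ctl/dhchap_secret"        # new host key (blob elided; full value in the trace)
echo 'DHHC-1:02:...' > "$ctl/dhchap_ctrl_secret"   # new controller key, when one is being set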
00:18:49.627 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: 2s
00:18:49.627 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:18:49.627 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:18:49.627 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=
00:18:49.627 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==:
00:18:49.627 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:18:49.627 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:18:49.627 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]]
00:18:49.627 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==: ]]
00:18:49.627 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZmFiZGFmNDc5NjIxODM4NTUzMTFmNzgwOWRkZWE1YTdmMWZmYjQzMmMwMWJhMTE5+3FmZw==:
00:18:49.887 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:18:49.887 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:18:51.800 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1
00:18:51.800 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0
00:18:51.800 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:18:51.800 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:18:51.800 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:18:51.800 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:18:51.800 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0
00:18:51.800 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:51.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:51.800 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:51.800 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:51.800 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:51.800 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:51.800 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:51.800 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:51.800 19:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:52.742 nvme0n1
00:18:52.742 19:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:52.742 19:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:52.742 19:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:52.742 19:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:52.742 19:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:52.742 19:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:53.002 19:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers
00:18:53.002 19:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name'
00:18:53.002 19:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:53.262 19:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:53.262 19:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:53.262 19:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:53.262 19:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:53.262 19:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:53.262 19:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0
00:18:53.262 19:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0
00:18:53.522 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers
00:18:53.522 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:53.522 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name'
00:18:53.522 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:53.522 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:53.522 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:53.522 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:53.522 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:53.522 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:53.522 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:18:53.522 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:53.522 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc
00:18:53.522 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:53.522 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc
00:18:53.522 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:53.522 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:53.782 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:54.043 request:
00:18:54.043 {
00:18:54.043 "name": "nvme0",
00:18:54.043 "dhchap_key": "key1",
00:18:54.043 "dhchap_ctrlr_key": "key3",
00:18:54.043 "method": "bdev_nvme_set_keys",
00:18:54.043 "req_id": 1
00:18:54.043 }
00:18:54.043 Got JSON-RPC error response
00:18:54.043 response:
00:18:54.043 {
00:18:54.043 "code": -13,
00:18:54.043 "message": "Permission denied"
00:18:54.043 }
00:18:54.043 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:18:54.043 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:54.043 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:54.043 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
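bdev_nvme_set_keys is the host-side half of live rekeying, and this request/response pair shows the guard rail: asking for a key pair the target has not granted (key1/key3 against a target holding key2/key3) is rejected up front with -13 (Permission denied) rather than by tearing the connection down. The allowed and rejected calls differ only in the key name:

rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3   # matches the target: ok
rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3   # mismatch: -13 Permission denied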
00:18:54.043 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:18:54.043 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:18:54.043 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:54.304 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 ))
00:18:54.304 19:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s
00:18:55.246 19:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:18:55.246 19:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:18:55.246 19:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:55.507 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 ))
00:18:55.507 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:55.507 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:55.507 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:55.508 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:55.508 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:55.508 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:55.508 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:56.449 nvme0n1
00:18:56.449 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:56.449 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:56.449 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:56.449 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:56.449 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:18:56.449 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:18:56.449 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:18:56.449 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc
00:18:56.449 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:56.449 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc
00:18:56.449 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:56.449 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:18:56.449 19:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:18:56.710 request:
00:18:56.710 {
00:18:56.710 "name": "nvme0",
00:18:56.710 "dhchap_key": "key2",
00:18:56.710 "dhchap_ctrlr_key": "key0",
00:18:56.710 "method": "bdev_nvme_set_keys",
00:18:56.710 "req_id": 1
00:18:56.710 }
00:18:56.710 Got JSON-RPC error response
00:18:56.710 response:
00:18:56.710 {
00:18:56.710 "code": -13,
00:18:56.710 "message": "Permission denied"
00:18:56.710 }
00:18:56.710 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:18:56.710 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:56.710 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:56.710 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:56.710 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:18:56.710 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:18:56.710 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:56.970 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 ))
00:18:56.970 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s
00:18:57.911 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:18:57.912 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:18:57.912 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:58.172 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 ))
00:18:58.172 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT
00:18:58.172 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup
00:18:58.172 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3629365
00:18:58.172 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3629365 ']'
00:18:58.172 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3629365
00:18:58.172 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname
19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:58.172 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3629365 00:18:58.172 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:58.172 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:58.172 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3629365' 00:18:58.172 killing process with pid 3629365 00:18:58.172 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3629365 00:18:58.172 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3629365 00:18:58.432 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:58.432 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:58.432 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:58.432 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:58.432 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:58.432 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:58.432 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:58.432 rmmod nvme_tcp 00:18:58.432 rmmod nvme_fabrics 00:18:58.432 rmmod nvme_keyring 00:18:58.432 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:58.432 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:58.432 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:58.432 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3654875 ']' 00:18:58.432 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3654875 00:18:58.432 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3654875 ']' 00:18:58.432 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3654875 00:18:58.432 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:58.432 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:58.433 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3654875 00:18:58.433 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:58.433 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:58.433 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3654875' 00:18:58.433 killing process with pid 3654875 00:18:58.433 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3654875 00:18:58.433 19:56:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3654875 00:18:58.693 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:58.693 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:58.693 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:58.693 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:58.693 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:58.693 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:58.693 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:58.693 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:58.693 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:58.693 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.693 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:58.693 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.620 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:00.620 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.57a /tmp/spdk.key-sha256.Vvj /tmp/spdk.key-sha384.SUx /tmp/spdk.key-sha512.laA /tmp/spdk.key-sha512.09u /tmp/spdk.key-sha384.iS1 /tmp/spdk.key-sha256.FlH '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:00.620 00:19:00.620 real 2m36.990s 00:19:00.620 user 5m53.029s 00:19:00.620 sys 0m24.786s 00:19:00.620 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:00.620 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.620 ************************************ 00:19:00.620 END TEST nvmf_auth_target 00:19:00.620 ************************************ 00:19:00.620 19:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:00.620 19:57:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:00.620 19:57:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:00.620 19:57:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:00.620 19:57:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:00.620 ************************************ 00:19:00.620 START TEST nvmf_bdevio_no_huge 00:19:00.620 ************************************ 00:19:00.620 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:00.900 * Looking for test storage... 
00:19:00.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:00.900 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:00.900 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:19:00.900 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:00.900 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:00.900 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:00.900 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:00.900 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:00.900 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:00.900 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:00.900 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:00.900 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:00.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.901 --rc genhtml_branch_coverage=1 00:19:00.901 --rc genhtml_function_coverage=1 00:19:00.901 --rc genhtml_legend=1 00:19:00.901 --rc geninfo_all_blocks=1 00:19:00.901 --rc geninfo_unexecuted_blocks=1 00:19:00.901 00:19:00.901 ' 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:00.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.901 --rc genhtml_branch_coverage=1 00:19:00.901 --rc genhtml_function_coverage=1 00:19:00.901 --rc genhtml_legend=1 00:19:00.901 --rc geninfo_all_blocks=1 00:19:00.901 --rc geninfo_unexecuted_blocks=1 00:19:00.901 00:19:00.901 ' 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:00.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.901 --rc genhtml_branch_coverage=1 00:19:00.901 --rc genhtml_function_coverage=1 00:19:00.901 --rc genhtml_legend=1 00:19:00.901 --rc geninfo_all_blocks=1 00:19:00.901 --rc geninfo_unexecuted_blocks=1 00:19:00.901 00:19:00.901 ' 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:00.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.901 --rc genhtml_branch_coverage=1 00:19:00.901 --rc genhtml_function_coverage=1 00:19:00.901 --rc genhtml_legend=1 00:19:00.901 --rc geninfo_all_blocks=1 00:19:00.901 --rc geninfo_unexecuted_blocks=1 00:19:00.901 00:19:00.901 ' 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:00.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:00.901 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:00.902 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:00.902 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:00.902 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:00.902 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:00.902 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.902 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:00.902 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.902 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:00.902 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:00.902 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:00.902 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:09.048 
19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:09.048 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:09.048 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:09.049 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:09.049 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:09.049 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:09.049 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:09.049 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:09.049 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:09.049 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:09.049 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:09.049 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:09.049 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:09.049 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:09.049 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:09.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:09.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.535 ms 00:19:09.049 00:19:09.049 --- 10.0.0.2 ping statistics --- 00:19:09.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.049 rtt min/avg/max/mdev = 0.535/0.535/0.535/0.000 ms 00:19:09.049 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:09.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:09.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:19:09.049 00:19:09.049 --- 10.0.0.1 ping statistics --- 00:19:09.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.049 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:19:09.049 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:09.049 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:19:09.049 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:09.049 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:09.049 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:09.049 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:09.049 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:09.049 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:09.050 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:09.050 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:09.050 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:09.050 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:09.050 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:09.050 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3663259 00:19:09.050 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3663259 00:19:09.050 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:09.050 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 3663259 ']' 00:19:09.050 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.050 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:19:09.050 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.050 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:09.050 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:09.050 [2024-11-26 19:57:09.300681] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:19:09.050 [2024-11-26 19:57:09.300757] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:09.050 [2024-11-26 19:57:09.412241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:09.050 [2024-11-26 19:57:09.473304] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:09.050 [2024-11-26 19:57:09.473351] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:09.050 [2024-11-26 19:57:09.473360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:09.050 [2024-11-26 19:57:09.473367] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:09.050 [2024-11-26 19:57:09.473373] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
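The nvmf_tgt launch traced above condenses to one namespaced invocation. The sketch below is a minimal reconstruction from the flags visible in the trace (0x78 masks cores 3-6, matching the reactor notices that follow); the readiness poll merely stands in for the suite's waitforlisten helper, whose actual body the trace does not show, so treat it as illustrative:

  # Start the NVMe-oF target inside the test namespace: no hugepages, 1 GiB heap, cores 3-6
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
  nvmfpid=$!
  # Illustrative readiness check: poll the default RPC socket until the app answers
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done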
00:19:09.050 [2024-11-26 19:57:09.474937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:09.050 [2024-11-26 19:57:09.475096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:09.050 [2024-11-26 19:57:09.475266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:09.050 [2024-11-26 19:57:09.475458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:09.310 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:09.311 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:19:09.311 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:09.311 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:09.311 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:09.571 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.571 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:09.571 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.571 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:09.571 [2024-11-26 19:57:10.172823] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:09.571 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.571 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:09.571 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.571 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:09.571 Malloc0 00:19:09.571 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.571 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:09.571 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.571 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:09.571 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.571 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:09.571 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.571 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:09.571 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.571 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:19:09.571 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.572 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:09.572 [2024-11-26 19:57:10.226857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:09.572 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.572 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:09.572 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:09.572 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:09.572 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:09.572 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:09.572 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:09.572 { 00:19:09.572 "params": { 00:19:09.572 "name": "Nvme$subsystem", 00:19:09.572 "trtype": "$TEST_TRANSPORT", 00:19:09.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.572 "adrfam": "ipv4", 00:19:09.572 "trsvcid": "$NVMF_PORT", 00:19:09.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.572 "hdgst": ${hdgst:-false}, 00:19:09.572 "ddgst": ${ddgst:-false} 00:19:09.572 }, 00:19:09.572 "method": "bdev_nvme_attach_controller" 00:19:09.572 } 00:19:09.572 EOF 00:19:09.572 )") 00:19:09.572 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:09.572 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:19:09.572 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:09.572 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:09.572 "params": { 00:19:09.572 "name": "Nvme1", 00:19:09.572 "trtype": "tcp", 00:19:09.572 "traddr": "10.0.0.2", 00:19:09.572 "adrfam": "ipv4", 00:19:09.572 "trsvcid": "4420", 00:19:09.572 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.572 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:09.572 "hdgst": false, 00:19:09.572 "ddgst": false 00:19:09.572 }, 00:19:09.572 "method": "bdev_nvme_attach_controller" 00:19:09.572 }' 00:19:09.572 [2024-11-26 19:57:10.285312] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
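The --json /dev/fd/62 argument in the bdevio command line above is not a regular file: the harness generates the attach-controller JSON with gen_nvmf_target_json (the printf output shown just before this startup banner) and streams it to bdevio through bash process substitution. A minimal equivalent, with the path, flags, and NQNs copied from the trace:

  # bdevio reads its bdev config from the anonymous fd that <( ... ) creates
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
      --json <(gen_nvmf_target_json) --no-huge -s 1024
  # The generated document attaches bdev Nvme1 over TCP to 10.0.0.2:4420
  # (subsystem nqn.2016-06.io.spdk:cnode1, host NQN nqn.2016-06.io.spdk:host1)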
00:19:09.572 [2024-11-26 19:57:10.285389] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3663630 ] 00:19:09.572 [2024-11-26 19:57:10.385133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:09.834 [2024-11-26 19:57:10.445743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.834 [2024-11-26 19:57:10.445903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.834 [2024-11-26 19:57:10.445903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:10.096 I/O targets: 00:19:10.096 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:10.096 00:19:10.096 00:19:10.096 CUnit - A unit testing framework for C - Version 2.1-3 00:19:10.096 http://cunit.sourceforge.net/ 00:19:10.096 00:19:10.096 00:19:10.096 Suite: bdevio tests on: Nvme1n1 00:19:10.096 Test: blockdev write read block ...passed 00:19:10.096 Test: blockdev write zeroes read block ...passed 00:19:10.096 Test: blockdev write zeroes read no split ...passed 00:19:10.096 Test: blockdev write zeroes read split ...passed 00:19:10.358 Test: blockdev write zeroes read split partial ...passed 00:19:10.358 Test: blockdev reset ...[2024-11-26 19:57:10.972802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:10.358 [2024-11-26 19:57:10.972900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f64810 (9): Bad file descriptor 00:19:10.358 [2024-11-26 19:57:11.117097] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:19:10.358 passed 00:19:10.358 Test: blockdev write read 8 blocks ...passed 00:19:10.358 Test: blockdev write read size > 128k ...passed 00:19:10.358 Test: blockdev write read invalid size ...passed 00:19:10.358 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:10.358 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:10.358 Test: blockdev write read max offset ...passed 00:19:10.620 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:10.620 Test: blockdev writev readv 8 blocks ...passed 00:19:10.620 Test: blockdev writev readv 30 x 1block ...passed 00:19:10.620 Test: blockdev writev readv block ...passed 00:19:10.620 Test: blockdev writev readv size > 128k ...passed 00:19:10.620 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:10.620 Test: blockdev comparev and writev ...[2024-11-26 19:57:11.302489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:10.620 [2024-11-26 19:57:11.302537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:10.620 [2024-11-26 19:57:11.302553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:10.620 [2024-11-26 19:57:11.302562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:10.620 [2024-11-26 19:57:11.303126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:10.620 [2024-11-26 19:57:11.303139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:10.620 [2024-11-26 19:57:11.303153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:10.620 [2024-11-26 19:57:11.303169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:10.620 [2024-11-26 19:57:11.303737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:10.620 [2024-11-26 19:57:11.303750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:10.620 [2024-11-26 19:57:11.303764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:10.620 [2024-11-26 19:57:11.303773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:10.620 [2024-11-26 19:57:11.304285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:10.620 [2024-11-26 19:57:11.304296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:10.620 [2024-11-26 19:57:11.304310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:10.620 [2024-11-26 19:57:11.304318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:10.620 passed 00:19:10.620 Test: blockdev nvme passthru rw ...passed 00:19:10.620 Test: blockdev nvme passthru vendor specific ...[2024-11-26 19:57:11.389041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:10.620 [2024-11-26 19:57:11.389056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:10.620 [2024-11-26 19:57:11.389457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:10.620 [2024-11-26 19:57:11.389468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:10.620 [2024-11-26 19:57:11.389848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:10.620 [2024-11-26 19:57:11.389870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:10.620 [2024-11-26 19:57:11.390255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:10.620 [2024-11-26 19:57:11.390266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:10.620 passed 00:19:10.620 Test: blockdev nvme admin passthru ...passed 00:19:10.881 Test: blockdev copy ...passed 00:19:10.881 00:19:10.881 Run Summary: Type Total Ran Passed Failed Inactive 00:19:10.881 suites 1 1 n/a 0 0 00:19:10.881 tests 23 23 23 0 0 00:19:10.881 asserts 152 152 152 0 n/a 00:19:10.881 00:19:10.881 Elapsed time = 1.404 seconds 00:19:11.143 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:11.143 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.143 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:11.143 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.143 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:11.143 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:11.143 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:11.143 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:11.143 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:11.143 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:11.143 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:11.143 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:11.143 rmmod nvme_tcp 00:19:11.143 rmmod nvme_fabrics 00:19:11.143 rmmod nvme_keyring 00:19:11.143 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:11.143 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:19:11.143 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:11.143 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3663259 ']' 00:19:11.143 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3663259 00:19:11.143 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 3663259 ']' 00:19:11.143 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 3663259 00:19:11.143 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:19:11.143 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.143 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3663259 00:19:11.143 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:19:11.143 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:19:11.143 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3663259' 00:19:11.143 killing process with pid 3663259 00:19:11.143 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 3663259 00:19:11.143 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 3663259 00:19:11.714 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:11.714 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:11.714 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:11.714 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:11.714 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:11.714 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:11.714 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:11.714 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:11.714 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:11.714 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.714 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:11.714 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.629 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:13.629 00:19:13.629 real 0m12.885s 00:19:13.629 user 0m15.594s 00:19:13.629 sys 0m6.884s 00:19:13.629 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:13.629 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:19:13.629 ************************************ 00:19:13.629 END TEST nvmf_bdevio_no_huge 00:19:13.629 ************************************ 00:19:13.629 19:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:13.629 19:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:13.629 19:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:13.629 19:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:13.629 ************************************ 00:19:13.629 START TEST nvmf_tls 00:19:13.629 ************************************ 00:19:13.629 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:13.891 * Looking for test storage... 00:19:13.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:13.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.891 --rc genhtml_branch_coverage=1 00:19:13.891 --rc genhtml_function_coverage=1 00:19:13.891 --rc genhtml_legend=1 00:19:13.891 --rc geninfo_all_blocks=1 00:19:13.891 --rc geninfo_unexecuted_blocks=1 00:19:13.891 00:19:13.891 ' 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:13.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.891 --rc genhtml_branch_coverage=1 00:19:13.891 --rc genhtml_function_coverage=1 00:19:13.891 --rc genhtml_legend=1 00:19:13.891 --rc geninfo_all_blocks=1 00:19:13.891 --rc geninfo_unexecuted_blocks=1 00:19:13.891 00:19:13.891 ' 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:13.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.891 --rc genhtml_branch_coverage=1 00:19:13.891 --rc genhtml_function_coverage=1 00:19:13.891 --rc genhtml_legend=1 00:19:13.891 --rc geninfo_all_blocks=1 00:19:13.891 --rc geninfo_unexecuted_blocks=1 00:19:13.891 00:19:13.891 ' 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:13.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.891 --rc genhtml_branch_coverage=1 00:19:13.891 --rc genhtml_function_coverage=1 00:19:13.891 --rc genhtml_legend=1 00:19:13.891 --rc geninfo_all_blocks=1 00:19:13.891 --rc geninfo_unexecuted_blocks=1 00:19:13.891 00:19:13.891 ' 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
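The lt/cmp_versions trace just above is scripts/common.sh deciding whether the installed lcov (1.15) predates 2.x: each version string is split on '.', '-' and ':' and the components are compared numerically, left to right, padding the shorter list with zeros. A minimal standalone sketch of that comparison, assuming purely numeric components; version_lt is a stand-in name for illustration, not the script's own helper:

    version_lt() {                                 # true when $1 sorts before $2
        local IFS='.-:' i
        local -a v1 v2
        read -ra v1 <<< "$1"                       # split on . - : into components
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing component decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                                   # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"

Here the test succeeds (1 < 2 on the first component), which is why the harness exports the pre-2.0 option spelling (lcov_branch_coverage/lcov_function_coverage) seen in the LCOV_OPTS trace above.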
00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.891 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:13.892 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:19:13.892 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:22.189 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:22.189 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:22.189 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:22.189 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:19:22.189 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:22.190 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:22.190 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:22.190 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:22.190 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:22.190 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:22.190 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:22.190 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:22.190 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:22.190 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:22.190 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:22.190 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:22.190 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:22.190 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:22.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:22.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:19:22.190 00:19:22.190 --- 10.0.0.2 ping statistics --- 00:19:22.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.190 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:19:22.190 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:22.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:22.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:19:22.190 00:19:22.190 --- 10.0.0.1 ping statistics --- 00:19:22.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.190 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:19:22.190 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:22.190 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:22.190 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:22.190 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:22.190 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:22.190 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:22.190 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:22.190 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:22.190 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:22.190 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:22.190 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:22.190 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:22.190 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.190 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3668524 00:19:22.190 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3668524 00:19:22.190 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:22.190 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3668524 ']' 00:19:22.190 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.190 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.190 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.190 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.190 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.190 [2024-11-26 19:57:22.247338] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
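By this point nvmftestinit has built the physical test topology traced above: the first e810 port (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened through iptables, and both directions are ping-verified before the target application starts inside the namespace. Condensed into its plain iproute2/iptables form (interface names are this run's; the nvmf_tgt path is shortened from the full workspace path):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &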
00:19:22.190 [2024-11-26 19:57:22.247403] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.190 [2024-11-26 19:57:22.347776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.190 [2024-11-26 19:57:22.398092] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:22.190 [2024-11-26 19:57:22.398136] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:22.190 [2024-11-26 19:57:22.398144] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.190 [2024-11-26 19:57:22.398152] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.190 [2024-11-26 19:57:22.398169] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:22.190 [2024-11-26 19:57:22.398932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.451 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.451 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:22.451 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:22.451 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:22.451 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.451 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.451 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:22.451 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:22.712 true 00:19:22.712 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:22.712 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:22.712 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:22.712 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:22.712 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:22.974 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:22.974 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:23.235 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:23.235 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:23.235 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:23.235 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:23.235 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:23.497 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:23.497 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:23.497 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:23.497 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:23.758 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:23.758 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:23.758 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:23.758 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:23.758 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:24.020 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:24.020 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:24.020 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:24.280 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:24.280 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:24.541 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:24.541 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:24.541 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:24.541 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:24.541 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:24.541 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:24.541 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:24.541 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:24.541 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:24.541 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:24.541 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:24.541 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:24.541 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:19:24.541 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:24.541 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:24.541 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:24.541 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:24.541 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:24.541 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:24.541 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.TxxQW2Q7kP 00:19:24.541 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:24.541 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.Hw6BfNHLdp 00:19:24.541 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:24.541 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:24.541 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.TxxQW2Q7kP 00:19:24.541 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.Hw6BfNHLdp 00:19:24.541 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:24.802 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:24.802 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.TxxQW2Q7kP 00:19:24.802 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.TxxQW2Q7kP 00:19:24.802 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:25.062 [2024-11-26 19:57:25.758370] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.062 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:25.322 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:25.322 [2024-11-26 19:57:26.063107] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:25.322 [2024-11-26 19:57:26.063310] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.322 19:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:25.582 malloc0 00:19:25.582 19:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:25.842 19:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TxxQW2Q7kP 00:19:25.842 19:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:26.104 19:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.TxxQW2Q7kP 00:19:36.098 Initializing NVMe Controllers 00:19:36.098 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:36.098 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:36.098 Initialization complete. Launching workers. 00:19:36.098 ======================================================== 00:19:36.098 Latency(us) 00:19:36.098 Device Information : IOPS MiB/s Average min max 00:19:36.098 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18746.96 73.23 3414.09 1021.95 3996.19 00:19:36.098 ======================================================== 00:19:36.098 Total : 18746.96 73.23 3414.09 1021.95 3996.19 00:19:36.098 00:19:36.098 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TxxQW2Q7kP 00:19:36.098 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:36.098 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:36.098 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:36.098 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TxxQW2Q7kP 00:19:36.098 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:36.098 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3671264 00:19:36.098 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:36.098 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3671264 /var/tmp/bdevperf.sock 00:19:36.098 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:36.098 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3671264 ']' 00:19:36.098 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:36.098 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:36.098 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
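Everything the perf run above depends on was assembled over JSON-RPC by tls.sh. Collapsed from the trace (rpc.py stands for the full scripts/rpc.py path; the key file is this run's mktemp result, chmod 0600, holding the interchange-format PSK NVMeTLSkey-1:01:... generated earlier):

    rpc.py sock_set_default_impl -i ssl
    rpc.py sock_impl_set_options -i ssl --tls-version 13
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k                       # -k: TLS-enabled listener
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.TxxQW2Q7kP
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0

spdk_nvme_perf then connects with -S ssl and --psk-path pointing at the same key file, which is the 18.7k IOPS run recorded above.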
00:19:36.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:36.098 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:36.098 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.357 [2024-11-26 19:57:36.931349] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:19:36.357 [2024-11-26 19:57:36.931404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3671264 ] 00:19:36.357 [2024-11-26 19:57:37.021138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.357 [2024-11-26 19:57:37.056546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.926 19:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:36.926 19:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:36.926 19:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TxxQW2Q7kP 00:19:37.185 19:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:37.445 [2024-11-26 19:57:38.033290] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:37.445 TLSTESTn1 00:19:37.445 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:37.445 Running I/O for 10 seconds... 
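The bdevperf pass exercises the same TLS path from the host side: the key is registered against bdevperf's private RPC socket, the controller is attached with --psk, and the I/O phase is driven through bdevperf.py. Collapsed from the trace (binary and script paths shortened):

    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TxxQW2Q7kP
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests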
00:19:39.767 4497.00 IOPS, 17.57 MiB/s [2024-11-26T18:57:41.529Z] 5395.00 IOPS, 21.07 MiB/s [2024-11-26T18:57:42.472Z] 5227.00 IOPS, 20.42 MiB/s [2024-11-26T18:57:43.414Z] 5149.50 IOPS, 20.12 MiB/s [2024-11-26T18:57:44.354Z] 5268.20 IOPS, 20.58 MiB/s [2024-11-26T18:57:45.297Z] 5460.17 IOPS, 21.33 MiB/s [2024-11-26T18:57:46.237Z] 5424.71 IOPS, 21.19 MiB/s [2024-11-26T18:57:47.620Z] 5356.75 IOPS, 20.92 MiB/s [2024-11-26T18:57:48.563Z] 5306.22 IOPS, 20.73 MiB/s [2024-11-26T18:57:48.563Z] 5321.00 IOPS, 20.79 MiB/s 00:19:47.742 Latency(us) 00:19:47.742 [2024-11-26T18:57:48.563Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.742 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:47.742 Verification LBA range: start 0x0 length 0x2000 00:19:47.742 TLSTESTn1 : 10.01 5326.94 20.81 0.00 0.00 23993.42 5079.04 37137.07 00:19:47.742 [2024-11-26T18:57:48.563Z] =================================================================================================================== 00:19:47.742 [2024-11-26T18:57:48.563Z] Total : 5326.94 20.81 0.00 0.00 23993.42 5079.04 37137.07 00:19:47.742 { 00:19:47.742 "results": [ 00:19:47.742 { 00:19:47.742 "job": "TLSTESTn1", 00:19:47.742 "core_mask": "0x4", 00:19:47.742 "workload": "verify", 00:19:47.742 "status": "finished", 00:19:47.742 "verify_range": { 00:19:47.742 "start": 0, 00:19:47.742 "length": 8192 00:19:47.742 }, 00:19:47.742 "queue_depth": 128, 00:19:47.742 "io_size": 4096, 00:19:47.742 "runtime": 10.012509, 00:19:47.742 "iops": 5326.936535088258, 00:19:47.742 "mibps": 20.808345840188508, 00:19:47.742 "io_failed": 0, 00:19:47.742 "io_timeout": 0, 00:19:47.742 "avg_latency_us": 23993.41562521874, 00:19:47.742 "min_latency_us": 5079.04, 00:19:47.742 "max_latency_us": 37137.066666666666 00:19:47.742 } 00:19:47.742 ], 00:19:47.742 "core_count": 1 00:19:47.742 } 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3671264 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3671264 ']' 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3671264 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3671264 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3671264' 00:19:47.742 killing process with pid 3671264 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3671264 00:19:47.742 Received shutdown signal, test time was about 10.000000 seconds 00:19:47.742 00:19:47.742 Latency(us) 00:19:47.742 [2024-11-26T18:57:48.563Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.742 [2024-11-26T18:57:48.563Z] 
=================================================================================================================== 00:19:47.742 [2024-11-26T18:57:48.563Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3671264 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Hw6BfNHLdp 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Hw6BfNHLdp 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Hw6BfNHLdp 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Hw6BfNHLdp 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3673603 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3673603 /var/tmp/bdevperf.sock 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3673603 ']' 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:47.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.742 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.742 [2024-11-26 19:57:48.496519] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:19:47.742 [2024-11-26 19:57:48.496576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3673603 ] 00:19:48.002 [2024-11-26 19:57:48.577804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.002 [2024-11-26 19:57:48.606520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.571 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.571 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:48.571 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Hw6BfNHLdp 00:19:48.832 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:48.832 [2024-11-26 19:57:49.598372] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:48.832 [2024-11-26 19:57:49.610069] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:48.832 [2024-11-26 19:57:49.610464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2233be0 (107): Transport endpoint is not connected 00:19:48.832 [2024-11-26 19:57:49.611459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2233be0 (9): Bad file descriptor 00:19:48.832 [2024-11-26 19:57:49.612461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:48.832 [2024-11-26 19:57:49.612469] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:48.832 [2024-11-26 19:57:49.612475] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:48.832 [2024-11-26 19:57:49.612481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
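This is the suite's first deliberate failure (target/tls.sh@147, wrapped in NOT): key0 is pointed at the second key file, /tmp/tmp.Hw6BfNHLdp, which does not match the PSK the target holds for host1, so the TLS handshake never completes, reads on the socket return 'Transport endpoint is not connected', and the controller is failed. The commands, collapsed from the trace; the JSON-RPC exchange below records the resulting error:

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Hw6BfNHLdp
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    # expected to fail: bdev_nvme_attach_controller returns code -5 (Input/output error)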
00:19:48.832 request: 00:19:48.832 { 00:19:48.832 "name": "TLSTEST", 00:19:48.832 "trtype": "tcp", 00:19:48.832 "traddr": "10.0.0.2", 00:19:48.832 "adrfam": "ipv4", 00:19:48.832 "trsvcid": "4420", 00:19:48.832 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.832 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:48.832 "prchk_reftag": false, 00:19:48.832 "prchk_guard": false, 00:19:48.832 "hdgst": false, 00:19:48.832 "ddgst": false, 00:19:48.832 "psk": "key0", 00:19:48.832 "allow_unrecognized_csi": false, 00:19:48.832 "method": "bdev_nvme_attach_controller", 00:19:48.832 "req_id": 1 00:19:48.832 } 00:19:48.832 Got JSON-RPC error response 00:19:48.832 response: 00:19:48.832 { 00:19:48.832 "code": -5, 00:19:48.832 "message": "Input/output error" 00:19:48.832 } 00:19:48.832 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3673603 00:19:48.832 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3673603 ']' 00:19:48.832 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3673603 00:19:48.832 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:48.832 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:48.832 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3673603 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3673603' 00:19:49.093 killing process with pid 3673603 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3673603 00:19:49.093 Received shutdown signal, test time was about 10.000000 seconds 00:19:49.093 00:19:49.093 Latency(us) 00:19:49.093 [2024-11-26T18:57:49.914Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.093 [2024-11-26T18:57:49.914Z] =================================================================================================================== 00:19:49.093 [2024-11-26T18:57:49.914Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3673603 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TxxQW2Q7kP 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.TxxQW2Q7kP 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TxxQW2Q7kP 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TxxQW2Q7kP 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3673850 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3673850 /var/tmp/bdevperf.sock 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3673850 ']' 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:49.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.093 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.093 [2024-11-26 19:57:49.845504] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:19:49.093 [2024-11-26 19:57:49.845560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3673850 ] 00:19:49.353 [2024-11-26 19:57:49.931308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.353 [2024-11-26 19:57:49.960255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.923 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.923 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:49.923 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TxxQW2Q7kP 00:19:50.183 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:50.183 [2024-11-26 19:57:50.981264] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:50.183 [2024-11-26 19:57:50.985848] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:50.183 [2024-11-26 19:57:50.985868] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:50.183 [2024-11-26 19:57:50.985887] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:50.183 [2024-11-26 19:57:50.986536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd48be0 (107): Transport endpoint is not connected 00:19:50.183 [2024-11-26 19:57:50.987531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd48be0 (9): Bad file descriptor 00:19:50.183 [2024-11-26 19:57:50.988533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:50.183 [2024-11-26 19:57:50.988540] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:50.183 [2024-11-26 19:57:50.988546] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:50.183 [2024-11-26 19:57:50.988553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:19:50.183 request: 00:19:50.183 { 00:19:50.183 "name": "TLSTEST", 00:19:50.183 "trtype": "tcp", 00:19:50.183 "traddr": "10.0.0.2", 00:19:50.183 "adrfam": "ipv4", 00:19:50.183 "trsvcid": "4420", 00:19:50.183 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.183 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:50.183 "prchk_reftag": false, 00:19:50.183 "prchk_guard": false, 00:19:50.183 "hdgst": false, 00:19:50.183 "ddgst": false, 00:19:50.183 "psk": "key0", 00:19:50.183 "allow_unrecognized_csi": false, 00:19:50.183 "method": "bdev_nvme_attach_controller", 00:19:50.183 "req_id": 1 00:19:50.183 } 00:19:50.183 Got JSON-RPC error response 00:19:50.183 response: 00:19:50.183 { 00:19:50.183 "code": -5, 00:19:50.183 "message": "Input/output error" 00:19:50.183 } 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3673850 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3673850 ']' 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3673850 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3673850 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3673850' 00:19:50.445 killing process with pid 3673850 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3673850 00:19:50.445 Received shutdown signal, test time was about 10.000000 seconds 00:19:50.445 00:19:50.445 Latency(us) 00:19:50.445 [2024-11-26T18:57:51.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.445 [2024-11-26T18:57:51.266Z] =================================================================================================================== 00:19:50.445 [2024-11-26T18:57:51.266Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3673850 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TxxQW2Q7kP 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.TxxQW2Q7kP 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TxxQW2Q7kP 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TxxQW2Q7kP 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3674038 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3674038 /var/tmp/bdevperf.sock 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3674038 ']' 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.445 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.445 [2024-11-26 19:57:51.233819] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:19:50.445 [2024-11-26 19:57:51.233875] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3674038 ] 00:19:50.706 [2024-11-26 19:57:51.316434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.706 [2024-11-26 19:57:51.344742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.276 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:51.276 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:51.276 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TxxQW2Q7kP 00:19:51.538 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:51.538 [2024-11-26 19:57:52.352773] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:51.798 [2024-11-26 19:57:52.357235] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:51.798 [2024-11-26 19:57:52.357253] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:51.798 [2024-11-26 19:57:52.357271] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:51.798 [2024-11-26 19:57:52.357936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186cbe0 (107): Transport endpoint is not connected 00:19:51.798 [2024-11-26 19:57:52.358931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186cbe0 (9): Bad file descriptor 00:19:51.798 [2024-11-26 19:57:52.359933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:51.798 [2024-11-26 19:57:52.359940] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:51.798 [2024-11-26 19:57:52.359946] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:51.798 [2024-11-26 19:57:52.359952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:19:51.798 request: 00:19:51.798 { 00:19:51.798 "name": "TLSTEST", 00:19:51.798 "trtype": "tcp", 00:19:51.798 "traddr": "10.0.0.2", 00:19:51.798 "adrfam": "ipv4", 00:19:51.798 "trsvcid": "4420", 00:19:51.798 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:51.798 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.798 "prchk_reftag": false, 00:19:51.798 "prchk_guard": false, 00:19:51.798 "hdgst": false, 00:19:51.798 "ddgst": false, 00:19:51.798 "psk": "key0", 00:19:51.798 "allow_unrecognized_csi": false, 00:19:51.798 "method": "bdev_nvme_attach_controller", 00:19:51.798 "req_id": 1 00:19:51.798 } 00:19:51.798 Got JSON-RPC error response 00:19:51.798 response: 00:19:51.798 { 00:19:51.798 "code": -5, 00:19:51.798 "message": "Input/output error" 00:19:51.798 } 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3674038 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3674038 ']' 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3674038 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3674038 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3674038' 00:19:51.798 killing process with pid 3674038 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3674038 00:19:51.798 Received shutdown signal, test time was about 10.000000 seconds 00:19:51.798 00:19:51.798 Latency(us) 00:19:51.798 [2024-11-26T18:57:52.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.798 [2024-11-26T18:57:52.619Z] =================================================================================================================== 00:19:51.798 [2024-11-26T18:57:52.619Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3674038 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:51.798 
19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3674316 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3674316 /var/tmp/bdevperf.sock 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3674316 ']' 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:51.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:51.798 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.798 [2024-11-26 19:57:52.573914] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:19:51.798 [2024-11-26 19:57:52.573957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3674316 ] 00:19:52.060 [2024-11-26 19:57:52.648712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.060 [2024-11-26 19:57:52.675757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.060 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:52.060 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:52.060 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:52.320 [2024-11-26 19:57:52.897559] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:52.320 [2024-11-26 19:57:52.897588] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:52.320 request: 00:19:52.320 { 00:19:52.320 "name": "key0", 00:19:52.320 "path": "", 00:19:52.320 "method": "keyring_file_add_key", 00:19:52.320 "req_id": 1 00:19:52.320 } 00:19:52.320 Got JSON-RPC error response 00:19:52.320 response: 00:19:52.320 { 00:19:52.320 "code": -1, 00:19:52.320 "message": "Operation not permitted" 00:19:52.320 } 00:19:52.320 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:52.320 [2024-11-26 19:57:53.082108] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:52.320 [2024-11-26 19:57:53.082137] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:52.320 request: 00:19:52.320 { 00:19:52.320 "name": "TLSTEST", 00:19:52.320 "trtype": "tcp", 00:19:52.320 "traddr": "10.0.0.2", 00:19:52.320 "adrfam": "ipv4", 00:19:52.320 "trsvcid": "4420", 00:19:52.320 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.320 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:52.321 "prchk_reftag": false, 00:19:52.321 "prchk_guard": false, 00:19:52.321 "hdgst": false, 00:19:52.321 "ddgst": false, 00:19:52.321 "psk": "key0", 00:19:52.321 "allow_unrecognized_csi": false, 00:19:52.321 "method": "bdev_nvme_attach_controller", 00:19:52.321 "req_id": 1 00:19:52.321 } 00:19:52.321 Got JSON-RPC error response 00:19:52.321 response: 00:19:52.321 { 00:19:52.321 "code": -126, 00:19:52.321 "message": "Required key not available" 00:19:52.321 } 00:19:52.321 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3674316 00:19:52.321 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3674316 ']' 00:19:52.321 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3674316 00:19:52.321 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:52.321 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.321 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3674316 00:19:52.581 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:52.581 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:52.581 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3674316' 00:19:52.581 killing process with pid 3674316 00:19:52.581 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3674316 00:19:52.581 Received shutdown signal, test time was about 10.000000 seconds 00:19:52.581 00:19:52.581 Latency(us) 00:19:52.581 [2024-11-26T18:57:53.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.581 [2024-11-26T18:57:53.402Z] =================================================================================================================== 00:19:52.581 [2024-11-26T18:57:53.402Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:52.581 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3674316 00:19:52.581 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:52.581 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:52.581 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:52.581 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:52.581 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:52.581 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3668524 00:19:52.581 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3668524 ']' 00:19:52.581 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3668524 00:19:52.581 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:52.581 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.581 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3668524 00:19:52.581 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:52.581 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:52.581 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3668524' 00:19:52.581 killing process with pid 3668524 00:19:52.581 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3668524 00:19:52.581 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3668524 00:19:52.842 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:52.842 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:52.842 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:52.842 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:52.842 19:57:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:52.842 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:52.842 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:52.842 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:52.842 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:52.842 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.YQc1xwwcMF 00:19:52.842 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:52.842 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.YQc1xwwcMF 00:19:52.842 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:52.842 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:52.842 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:52.842 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.842 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3674658 00:19:52.842 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:52.842 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3674658 00:19:52.842 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3674658 ']' 00:19:52.842 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.842 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:52.842 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.842 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:52.842 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.842 [2024-11-26 19:57:53.557482] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:19:52.842 [2024-11-26 19:57:53.557540] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.842 [2024-11-26 19:57:53.648106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.103 [2024-11-26 19:57:53.679186] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.103 [2024-11-26 19:57:53.679217] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:53.103 [2024-11-26 19:57:53.679222] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.103 [2024-11-26 19:57:53.679227] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.103 [2024-11-26 19:57:53.679231] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:53.103 [2024-11-26 19:57:53.679719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.673 19:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.673 19:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:53.673 19:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:53.673 19:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:53.673 19:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.673 19:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.673 19:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.YQc1xwwcMF 00:19:53.674 19:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.YQc1xwwcMF 00:19:53.674 19:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:53.934 [2024-11-26 19:57:54.553811] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.934 19:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:54.195 19:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:54.195 [2024-11-26 19:57:54.914699] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:54.195 [2024-11-26 19:57:54.914898] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:54.195 19:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:54.455 malloc0 00:19:54.455 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:54.715 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.YQc1xwwcMF 00:19:54.715 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:54.975 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YQc1xwwcMF 00:19:54.975 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:54.975 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:54.975 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:54.975 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.YQc1xwwcMF 00:19:54.975 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:54.975 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3675024 00:19:54.975 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:54.975 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:54.975 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3675024 /var/tmp/bdevperf.sock 00:19:54.975 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3675024 ']' 00:19:54.975 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.975 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.975 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:54.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:54.975 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.975 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.975 [2024-11-26 19:57:55.706030] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:19:54.975 [2024-11-26 19:57:55.706085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3675024 ] 00:19:54.975 [2024-11-26 19:57:55.789229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.236 [2024-11-26 19:57:55.817981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.806 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.806 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:55.806 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YQc1xwwcMF 00:19:56.066 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:56.067 [2024-11-26 19:57:56.857899] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:56.327 TLSTESTn1 00:19:56.327 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:56.327 Running I/O for 10 seconds... 00:19:58.650 5333.00 IOPS, 20.83 MiB/s [2024-11-26T18:58:00.414Z] 5011.00 IOPS, 19.57 MiB/s [2024-11-26T18:58:01.356Z] 5463.00 IOPS, 21.34 MiB/s [2024-11-26T18:58:02.296Z] 5449.25 IOPS, 21.29 MiB/s [2024-11-26T18:58:03.236Z] 5579.00 IOPS, 21.79 MiB/s [2024-11-26T18:58:04.178Z] 5455.17 IOPS, 21.31 MiB/s [2024-11-26T18:58:05.117Z] 5565.14 IOPS, 21.74 MiB/s [2024-11-26T18:58:06.499Z] 5616.62 IOPS, 21.94 MiB/s [2024-11-26T18:58:07.069Z] 5608.44 IOPS, 21.91 MiB/s [2024-11-26T18:58:07.330Z] 5511.60 IOPS, 21.53 MiB/s 00:20:06.509 Latency(us) 00:20:06.509 [2024-11-26T18:58:07.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.510 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:06.510 Verification LBA range: start 0x0 length 0x2000 00:20:06.510 TLSTESTn1 : 10.01 5517.22 21.55 0.00 0.00 23166.27 5570.56 88255.15 00:20:06.510 [2024-11-26T18:58:07.331Z] =================================================================================================================== 00:20:06.510 [2024-11-26T18:58:07.331Z] Total : 5517.22 21.55 0.00 0.00 23166.27 5570.56 88255.15 00:20:06.510 { 00:20:06.510 "results": [ 00:20:06.510 { 00:20:06.510 "job": "TLSTESTn1", 00:20:06.510 "core_mask": "0x4", 00:20:06.510 "workload": "verify", 00:20:06.510 "status": "finished", 00:20:06.510 "verify_range": { 00:20:06.510 "start": 0, 00:20:06.510 "length": 8192 00:20:06.510 }, 00:20:06.510 "queue_depth": 128, 00:20:06.510 "io_size": 4096, 00:20:06.510 "runtime": 10.012841, 00:20:06.510 "iops": 5517.215343777056, 00:20:06.510 "mibps": 21.551622436629124, 00:20:06.510 "io_failed": 0, 00:20:06.510 "io_timeout": 0, 00:20:06.510 "avg_latency_us": 23166.271522304483, 00:20:06.510 "min_latency_us": 5570.56, 00:20:06.510 "max_latency_us": 88255.14666666667 00:20:06.510 } 00:20:06.510 ], 00:20:06.510 "core_count": 1 
00:20:06.510 } 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3675024 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3675024 ']' 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3675024 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3675024 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3675024' 00:20:06.510 killing process with pid 3675024 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3675024 00:20:06.510 Received shutdown signal, test time was about 10.000000 seconds 00:20:06.510 00:20:06.510 Latency(us) 00:20:06.510 [2024-11-26T18:58:07.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.510 [2024-11-26T18:58:07.331Z] =================================================================================================================== 00:20:06.510 [2024-11-26T18:58:07.331Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3675024 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.YQc1xwwcMF 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YQc1xwwcMF 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YQc1xwwcMF 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YQc1xwwcMF 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:06.510 19:58:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.YQc1xwwcMF 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3677350 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3677350 /var/tmp/bdevperf.sock 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3677350 ']' 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:06.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:06.510 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.771 [2024-11-26 19:58:07.328696] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:20:06.771 [2024-11-26 19:58:07.328751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3677350 ] 00:20:06.771 [2024-11-26 19:58:07.412812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.771 [2024-11-26 19:58:07.439908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.342 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.342 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:07.342 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YQc1xwwcMF 00:20:07.607 [2024-11-26 19:58:08.279291] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.YQc1xwwcMF': 0100666 00:20:07.607 [2024-11-26 19:58:08.279319] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:07.607 request: 00:20:07.607 { 00:20:07.607 "name": "key0", 00:20:07.607 "path": "/tmp/tmp.YQc1xwwcMF", 00:20:07.607 "method": "keyring_file_add_key", 00:20:07.607 "req_id": 1 00:20:07.607 } 00:20:07.607 Got JSON-RPC error response 00:20:07.607 response: 00:20:07.607 { 00:20:07.607 "code": -1, 00:20:07.607 "message": "Operation not permitted" 00:20:07.607 } 00:20:07.607 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:07.916 [2024-11-26 19:58:08.455806] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:07.916 [2024-11-26 19:58:08.455829] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:07.916 request: 00:20:07.916 { 00:20:07.916 "name": "TLSTEST", 00:20:07.916 "trtype": "tcp", 00:20:07.916 "traddr": "10.0.0.2", 00:20:07.916 "adrfam": "ipv4", 00:20:07.916 "trsvcid": "4420", 00:20:07.916 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.916 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:07.916 "prchk_reftag": false, 00:20:07.916 "prchk_guard": false, 00:20:07.916 "hdgst": false, 00:20:07.916 "ddgst": false, 00:20:07.916 "psk": "key0", 00:20:07.916 "allow_unrecognized_csi": false, 00:20:07.916 "method": "bdev_nvme_attach_controller", 00:20:07.916 "req_id": 1 00:20:07.916 } 00:20:07.916 Got JSON-RPC error response 00:20:07.916 response: 00:20:07.916 { 00:20:07.916 "code": -126, 00:20:07.916 "message": "Required key not available" 00:20:07.916 } 00:20:07.916 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3677350 00:20:07.917 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3677350 ']' 00:20:07.917 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3677350 00:20:07.917 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:07.917 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:07.917 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3677350 00:20:07.917 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:07.917 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:07.917 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3677350' 00:20:07.917 killing process with pid 3677350 00:20:07.917 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3677350 00:20:07.917 Received shutdown signal, test time was about 10.000000 seconds 00:20:07.917 00:20:07.917 Latency(us) 00:20:07.917 [2024-11-26T18:58:08.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.917 [2024-11-26T18:58:08.738Z] =================================================================================================================== 00:20:07.917 [2024-11-26T18:58:08.738Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:07.917 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3677350 00:20:07.917 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:07.917 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:07.917 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:07.917 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:07.917 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:07.917 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3674658 00:20:07.917 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3674658 ']' 00:20:07.917 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3674658 00:20:07.917 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:07.917 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:07.917 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3674658 00:20:08.257 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:08.257 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:08.257 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3674658' 00:20:08.257 killing process with pid 3674658 00:20:08.257 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3674658 00:20:08.257 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3674658 00:20:08.257 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:08.257 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:08.257 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:08.257 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.257 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=3677620 00:20:08.257 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3677620 00:20:08.257 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:08.257 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3677620 ']' 00:20:08.257 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.257 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.257 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.257 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.257 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.257 [2024-11-26 19:58:08.890136] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:20:08.257 [2024-11-26 19:58:08.890202] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.257 [2024-11-26 19:58:08.980240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.257 [2024-11-26 19:58:09.010063] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.257 [2024-11-26 19:58:09.010091] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.257 [2024-11-26 19:58:09.010097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.257 [2024-11-26 19:58:09.010102] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.257 [2024-11-26 19:58:09.010106] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:08.257 [2024-11-26 19:58:09.010595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.212 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:09.212 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:09.212 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:09.212 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:09.212 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.212 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.212 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.YQc1xwwcMF 00:20:09.212 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:09.212 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.YQc1xwwcMF 00:20:09.212 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:20:09.212 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:09.212 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:20:09.212 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:09.212 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.YQc1xwwcMF 00:20:09.212 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.YQc1xwwcMF 00:20:09.212 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:09.212 [2024-11-26 19:58:09.871617] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.212 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:09.472 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:09.472 [2024-11-26 19:58:10.244536] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:09.472 [2024-11-26 19:58:10.244741] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.472 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:09.733 malloc0 00:20:09.733 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:09.994 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.YQc1xwwcMF 00:20:09.994 [2024-11-26 
19:58:10.795635] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.YQc1xwwcMF': 0100666 00:20:09.994 [2024-11-26 19:58:10.795656] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:09.994 request: 00:20:09.994 { 00:20:09.994 "name": "key0", 00:20:09.994 "path": "/tmp/tmp.YQc1xwwcMF", 00:20:09.994 "method": "keyring_file_add_key", 00:20:09.994 "req_id": 1 00:20:09.994 } 00:20:09.994 Got JSON-RPC error response 00:20:09.994 response: 00:20:09.994 { 00:20:09.994 "code": -1, 00:20:09.994 "message": "Operation not permitted" 00:20:09.994 } 00:20:09.994 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:10.255 [2024-11-26 19:58:10.964070] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:10.255 [2024-11-26 19:58:10.964096] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:10.255 request: 00:20:10.255 { 00:20:10.255 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.255 "host": "nqn.2016-06.io.spdk:host1", 00:20:10.255 "psk": "key0", 00:20:10.255 "method": "nvmf_subsystem_add_host", 00:20:10.255 "req_id": 1 00:20:10.255 } 00:20:10.255 Got JSON-RPC error response 00:20:10.255 response: 00:20:10.255 { 00:20:10.255 "code": -32603, 00:20:10.255 "message": "Internal error" 00:20:10.255 } 00:20:10.255 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:10.255 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:10.255 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:10.255 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:10.255 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3677620 00:20:10.255 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3677620 ']' 00:20:10.255 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3677620 00:20:10.255 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:10.255 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.255 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3677620 00:20:10.255 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:10.255 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:10.255 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3677620' 00:20:10.255 killing process with pid 3677620 00:20:10.255 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3677620 00:20:10.255 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3677620 00:20:10.517 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.YQc1xwwcMF 00:20:10.517 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:10.517 19:58:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:10.517 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:10.517 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.517 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3678096 00:20:10.517 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3678096 00:20:10.517 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:10.517 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3678096 ']' 00:20:10.517 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.517 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:10.517 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.517 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:10.517 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.517 [2024-11-26 19:58:11.212554] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:20:10.517 [2024-11-26 19:58:11.212607] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.517 [2024-11-26 19:58:11.303731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.517 [2024-11-26 19:58:11.332305] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.517 [2024-11-26 19:58:11.332338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.517 [2024-11-26 19:58:11.332345] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:10.517 [2024-11-26 19:58:11.332350] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:10.517 [2024-11-26 19:58:11.332354] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
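The failure traced above is the point of the case at tls.sh@178, which wraps setup_nvmf_tgt in NOT: the keyring file backend rejects a PSK file readable by group or others (the error prints mode 0100666), so keyring_file_add_key returns -1 and the follow-up nvmf_subsystem_add_host fails with "Key 'key0' does not exist". After the chmod 0600 at tls.sh@182 the same setup succeeds. For reference, the target-side sequence setup_nvmf_tgt drives over rpc.py, condensed from the trace above (rpc.py path shortened; the key path is this run's temp file):

  chmod 0600 /tmp/tmp.YQc1xwwcMF     # keyring refuses anything looser than owner read/write
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: listener requires TLS
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.YQc1xwwcMF
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0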
00:20:10.517 [2024-11-26 19:58:11.332822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.459 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:11.459 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:11.459 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:11.459 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:11.459 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.459 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.459 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.YQc1xwwcMF 00:20:11.459 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.YQc1xwwcMF 00:20:11.459 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:11.459 [2024-11-26 19:58:12.197703] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.459 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:11.721 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:11.983 [2024-11-26 19:58:12.558602] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:11.983 [2024-11-26 19:58:12.558804] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.983 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:11.983 malloc0 00:20:11.983 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:12.245 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.YQc1xwwcMF 00:20:12.505 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:12.505 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:12.505 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3678463 00:20:12.505 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:12.505 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3678463 /var/tmp/bdevperf.sock 00:20:12.505 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 3678463 ']' 00:20:12.505 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:12.505 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:12.505 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:12.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:12.505 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:12.505 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.766 [2024-11-26 19:58:13.338008] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:20:12.766 [2024-11-26 19:58:13.338061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3678463 ] 00:20:12.766 [2024-11-26 19:58:13.424821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.766 [2024-11-26 19:58:13.459736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.702 19:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:13.702 19:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:13.702 19:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YQc1xwwcMF 00:20:13.702 19:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:13.702 [2024-11-26 19:58:14.488263] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:13.961 TLSTESTn1 00:20:13.961 19:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:14.221 19:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:14.221 "subsystems": [ 00:20:14.221 { 00:20:14.221 "subsystem": "keyring", 00:20:14.221 "config": [ 00:20:14.221 { 00:20:14.221 "method": "keyring_file_add_key", 00:20:14.221 "params": { 00:20:14.221 "name": "key0", 00:20:14.221 "path": "/tmp/tmp.YQc1xwwcMF" 00:20:14.221 } 00:20:14.221 } 00:20:14.221 ] 00:20:14.221 }, 00:20:14.221 { 00:20:14.221 "subsystem": "iobuf", 00:20:14.221 "config": [ 00:20:14.221 { 00:20:14.221 "method": "iobuf_set_options", 00:20:14.221 "params": { 00:20:14.221 "small_pool_count": 8192, 00:20:14.221 "large_pool_count": 1024, 00:20:14.221 "small_bufsize": 8192, 00:20:14.221 "large_bufsize": 135168, 00:20:14.221 "enable_numa": false 00:20:14.221 } 00:20:14.221 } 00:20:14.221 ] 00:20:14.221 }, 00:20:14.221 { 00:20:14.221 "subsystem": "sock", 00:20:14.221 "config": [ 00:20:14.221 { 00:20:14.221 "method": "sock_set_default_impl", 00:20:14.221 "params": { 00:20:14.221 "impl_name": "posix" 
00:20:14.221 } 00:20:14.221 }, 00:20:14.221 { 00:20:14.221 "method": "sock_impl_set_options", 00:20:14.221 "params": { 00:20:14.221 "impl_name": "ssl", 00:20:14.221 "recv_buf_size": 4096, 00:20:14.221 "send_buf_size": 4096, 00:20:14.221 "enable_recv_pipe": true, 00:20:14.221 "enable_quickack": false, 00:20:14.221 "enable_placement_id": 0, 00:20:14.221 "enable_zerocopy_send_server": true, 00:20:14.221 "enable_zerocopy_send_client": false, 00:20:14.221 "zerocopy_threshold": 0, 00:20:14.221 "tls_version": 0, 00:20:14.221 "enable_ktls": false 00:20:14.221 } 00:20:14.221 }, 00:20:14.221 { 00:20:14.221 "method": "sock_impl_set_options", 00:20:14.221 "params": { 00:20:14.221 "impl_name": "posix", 00:20:14.221 "recv_buf_size": 2097152, 00:20:14.221 "send_buf_size": 2097152, 00:20:14.221 "enable_recv_pipe": true, 00:20:14.221 "enable_quickack": false, 00:20:14.221 "enable_placement_id": 0, 00:20:14.221 "enable_zerocopy_send_server": true, 00:20:14.221 "enable_zerocopy_send_client": false, 00:20:14.221 "zerocopy_threshold": 0, 00:20:14.221 "tls_version": 0, 00:20:14.221 "enable_ktls": false 00:20:14.221 } 00:20:14.221 } 00:20:14.221 ] 00:20:14.221 }, 00:20:14.221 { 00:20:14.221 "subsystem": "vmd", 00:20:14.221 "config": [] 00:20:14.221 }, 00:20:14.221 { 00:20:14.221 "subsystem": "accel", 00:20:14.221 "config": [ 00:20:14.221 { 00:20:14.221 "method": "accel_set_options", 00:20:14.221 "params": { 00:20:14.221 "small_cache_size": 128, 00:20:14.221 "large_cache_size": 16, 00:20:14.221 "task_count": 2048, 00:20:14.221 "sequence_count": 2048, 00:20:14.221 "buf_count": 2048 00:20:14.221 } 00:20:14.221 } 00:20:14.221 ] 00:20:14.221 }, 00:20:14.221 { 00:20:14.221 "subsystem": "bdev", 00:20:14.221 "config": [ 00:20:14.221 { 00:20:14.221 "method": "bdev_set_options", 00:20:14.221 "params": { 00:20:14.221 "bdev_io_pool_size": 65535, 00:20:14.221 "bdev_io_cache_size": 256, 00:20:14.221 "bdev_auto_examine": true, 00:20:14.221 "iobuf_small_cache_size": 128, 00:20:14.221 "iobuf_large_cache_size": 16 00:20:14.221 } 00:20:14.221 }, 00:20:14.221 { 00:20:14.221 "method": "bdev_raid_set_options", 00:20:14.222 "params": { 00:20:14.222 "process_window_size_kb": 1024, 00:20:14.222 "process_max_bandwidth_mb_sec": 0 00:20:14.222 } 00:20:14.222 }, 00:20:14.222 { 00:20:14.222 "method": "bdev_iscsi_set_options", 00:20:14.222 "params": { 00:20:14.222 "timeout_sec": 30 00:20:14.222 } 00:20:14.222 }, 00:20:14.222 { 00:20:14.222 "method": "bdev_nvme_set_options", 00:20:14.222 "params": { 00:20:14.222 "action_on_timeout": "none", 00:20:14.222 "timeout_us": 0, 00:20:14.222 "timeout_admin_us": 0, 00:20:14.222 "keep_alive_timeout_ms": 10000, 00:20:14.222 "arbitration_burst": 0, 00:20:14.222 "low_priority_weight": 0, 00:20:14.222 "medium_priority_weight": 0, 00:20:14.222 "high_priority_weight": 0, 00:20:14.222 "nvme_adminq_poll_period_us": 10000, 00:20:14.222 "nvme_ioq_poll_period_us": 0, 00:20:14.222 "io_queue_requests": 0, 00:20:14.222 "delay_cmd_submit": true, 00:20:14.222 "transport_retry_count": 4, 00:20:14.222 "bdev_retry_count": 3, 00:20:14.222 "transport_ack_timeout": 0, 00:20:14.222 "ctrlr_loss_timeout_sec": 0, 00:20:14.222 "reconnect_delay_sec": 0, 00:20:14.222 "fast_io_fail_timeout_sec": 0, 00:20:14.222 "disable_auto_failback": false, 00:20:14.222 "generate_uuids": false, 00:20:14.222 "transport_tos": 0, 00:20:14.222 "nvme_error_stat": false, 00:20:14.222 "rdma_srq_size": 0, 00:20:14.222 "io_path_stat": false, 00:20:14.222 "allow_accel_sequence": false, 00:20:14.222 "rdma_max_cq_size": 0, 00:20:14.222 
"rdma_cm_event_timeout_ms": 0, 00:20:14.222 "dhchap_digests": [ 00:20:14.222 "sha256", 00:20:14.222 "sha384", 00:20:14.222 "sha512" 00:20:14.222 ], 00:20:14.222 "dhchap_dhgroups": [ 00:20:14.222 "null", 00:20:14.222 "ffdhe2048", 00:20:14.222 "ffdhe3072", 00:20:14.222 "ffdhe4096", 00:20:14.222 "ffdhe6144", 00:20:14.222 "ffdhe8192" 00:20:14.222 ] 00:20:14.222 } 00:20:14.222 }, 00:20:14.222 { 00:20:14.222 "method": "bdev_nvme_set_hotplug", 00:20:14.222 "params": { 00:20:14.222 "period_us": 100000, 00:20:14.222 "enable": false 00:20:14.222 } 00:20:14.222 }, 00:20:14.222 { 00:20:14.222 "method": "bdev_malloc_create", 00:20:14.222 "params": { 00:20:14.222 "name": "malloc0", 00:20:14.222 "num_blocks": 8192, 00:20:14.222 "block_size": 4096, 00:20:14.222 "physical_block_size": 4096, 00:20:14.222 "uuid": "998b06ec-d481-4321-bd05-f5422990ca79", 00:20:14.222 "optimal_io_boundary": 0, 00:20:14.222 "md_size": 0, 00:20:14.222 "dif_type": 0, 00:20:14.222 "dif_is_head_of_md": false, 00:20:14.222 "dif_pi_format": 0 00:20:14.222 } 00:20:14.222 }, 00:20:14.222 { 00:20:14.222 "method": "bdev_wait_for_examine" 00:20:14.222 } 00:20:14.222 ] 00:20:14.222 }, 00:20:14.222 { 00:20:14.222 "subsystem": "nbd", 00:20:14.222 "config": [] 00:20:14.222 }, 00:20:14.222 { 00:20:14.222 "subsystem": "scheduler", 00:20:14.222 "config": [ 00:20:14.222 { 00:20:14.222 "method": "framework_set_scheduler", 00:20:14.222 "params": { 00:20:14.222 "name": "static" 00:20:14.222 } 00:20:14.222 } 00:20:14.222 ] 00:20:14.222 }, 00:20:14.222 { 00:20:14.222 "subsystem": "nvmf", 00:20:14.222 "config": [ 00:20:14.222 { 00:20:14.222 "method": "nvmf_set_config", 00:20:14.222 "params": { 00:20:14.222 "discovery_filter": "match_any", 00:20:14.222 "admin_cmd_passthru": { 00:20:14.222 "identify_ctrlr": false 00:20:14.222 }, 00:20:14.222 "dhchap_digests": [ 00:20:14.222 "sha256", 00:20:14.222 "sha384", 00:20:14.222 "sha512" 00:20:14.222 ], 00:20:14.222 "dhchap_dhgroups": [ 00:20:14.222 "null", 00:20:14.222 "ffdhe2048", 00:20:14.222 "ffdhe3072", 00:20:14.222 "ffdhe4096", 00:20:14.222 "ffdhe6144", 00:20:14.222 "ffdhe8192" 00:20:14.222 ] 00:20:14.222 } 00:20:14.222 }, 00:20:14.222 { 00:20:14.222 "method": "nvmf_set_max_subsystems", 00:20:14.222 "params": { 00:20:14.222 "max_subsystems": 1024 00:20:14.222 } 00:20:14.222 }, 00:20:14.222 { 00:20:14.222 "method": "nvmf_set_crdt", 00:20:14.222 "params": { 00:20:14.222 "crdt1": 0, 00:20:14.222 "crdt2": 0, 00:20:14.222 "crdt3": 0 00:20:14.222 } 00:20:14.222 }, 00:20:14.222 { 00:20:14.222 "method": "nvmf_create_transport", 00:20:14.222 "params": { 00:20:14.222 "trtype": "TCP", 00:20:14.222 "max_queue_depth": 128, 00:20:14.222 "max_io_qpairs_per_ctrlr": 127, 00:20:14.222 "in_capsule_data_size": 4096, 00:20:14.222 "max_io_size": 131072, 00:20:14.222 "io_unit_size": 131072, 00:20:14.222 "max_aq_depth": 128, 00:20:14.222 "num_shared_buffers": 511, 00:20:14.222 "buf_cache_size": 4294967295, 00:20:14.222 "dif_insert_or_strip": false, 00:20:14.222 "zcopy": false, 00:20:14.222 "c2h_success": false, 00:20:14.222 "sock_priority": 0, 00:20:14.222 "abort_timeout_sec": 1, 00:20:14.222 "ack_timeout": 0, 00:20:14.222 "data_wr_pool_size": 0 00:20:14.222 } 00:20:14.222 }, 00:20:14.222 { 00:20:14.222 "method": "nvmf_create_subsystem", 00:20:14.222 "params": { 00:20:14.222 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.222 "allow_any_host": false, 00:20:14.222 "serial_number": "SPDK00000000000001", 00:20:14.222 "model_number": "SPDK bdev Controller", 00:20:14.222 "max_namespaces": 10, 00:20:14.222 "min_cntlid": 1, 00:20:14.222 
"max_cntlid": 65519, 00:20:14.222 "ana_reporting": false 00:20:14.222 } 00:20:14.222 }, 00:20:14.222 { 00:20:14.222 "method": "nvmf_subsystem_add_host", 00:20:14.222 "params": { 00:20:14.222 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.222 "host": "nqn.2016-06.io.spdk:host1", 00:20:14.222 "psk": "key0" 00:20:14.222 } 00:20:14.222 }, 00:20:14.222 { 00:20:14.222 "method": "nvmf_subsystem_add_ns", 00:20:14.222 "params": { 00:20:14.222 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.222 "namespace": { 00:20:14.222 "nsid": 1, 00:20:14.222 "bdev_name": "malloc0", 00:20:14.222 "nguid": "998B06ECD4814321BD05F5422990CA79", 00:20:14.222 "uuid": "998b06ec-d481-4321-bd05-f5422990ca79", 00:20:14.222 "no_auto_visible": false 00:20:14.222 } 00:20:14.222 } 00:20:14.222 }, 00:20:14.222 { 00:20:14.222 "method": "nvmf_subsystem_add_listener", 00:20:14.222 "params": { 00:20:14.222 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.222 "listen_address": { 00:20:14.222 "trtype": "TCP", 00:20:14.222 "adrfam": "IPv4", 00:20:14.222 "traddr": "10.0.0.2", 00:20:14.222 "trsvcid": "4420" 00:20:14.222 }, 00:20:14.222 "secure_channel": true 00:20:14.222 } 00:20:14.222 } 00:20:14.222 ] 00:20:14.222 } 00:20:14.222 ] 00:20:14.222 }' 00:20:14.222 19:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:14.483 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:14.483 "subsystems": [ 00:20:14.483 { 00:20:14.483 "subsystem": "keyring", 00:20:14.483 "config": [ 00:20:14.483 { 00:20:14.483 "method": "keyring_file_add_key", 00:20:14.483 "params": { 00:20:14.483 "name": "key0", 00:20:14.483 "path": "/tmp/tmp.YQc1xwwcMF" 00:20:14.483 } 00:20:14.483 } 00:20:14.483 ] 00:20:14.483 }, 00:20:14.483 { 00:20:14.483 "subsystem": "iobuf", 00:20:14.483 "config": [ 00:20:14.483 { 00:20:14.483 "method": "iobuf_set_options", 00:20:14.483 "params": { 00:20:14.483 "small_pool_count": 8192, 00:20:14.483 "large_pool_count": 1024, 00:20:14.483 "small_bufsize": 8192, 00:20:14.483 "large_bufsize": 135168, 00:20:14.483 "enable_numa": false 00:20:14.483 } 00:20:14.483 } 00:20:14.483 ] 00:20:14.483 }, 00:20:14.483 { 00:20:14.483 "subsystem": "sock", 00:20:14.483 "config": [ 00:20:14.483 { 00:20:14.483 "method": "sock_set_default_impl", 00:20:14.483 "params": { 00:20:14.483 "impl_name": "posix" 00:20:14.483 } 00:20:14.483 }, 00:20:14.483 { 00:20:14.483 "method": "sock_impl_set_options", 00:20:14.483 "params": { 00:20:14.483 "impl_name": "ssl", 00:20:14.483 "recv_buf_size": 4096, 00:20:14.483 "send_buf_size": 4096, 00:20:14.483 "enable_recv_pipe": true, 00:20:14.483 "enable_quickack": false, 00:20:14.483 "enable_placement_id": 0, 00:20:14.483 "enable_zerocopy_send_server": true, 00:20:14.483 "enable_zerocopy_send_client": false, 00:20:14.483 "zerocopy_threshold": 0, 00:20:14.483 "tls_version": 0, 00:20:14.483 "enable_ktls": false 00:20:14.483 } 00:20:14.483 }, 00:20:14.483 { 00:20:14.483 "method": "sock_impl_set_options", 00:20:14.483 "params": { 00:20:14.483 "impl_name": "posix", 00:20:14.483 "recv_buf_size": 2097152, 00:20:14.483 "send_buf_size": 2097152, 00:20:14.483 "enable_recv_pipe": true, 00:20:14.483 "enable_quickack": false, 00:20:14.483 "enable_placement_id": 0, 00:20:14.483 "enable_zerocopy_send_server": true, 00:20:14.483 "enable_zerocopy_send_client": false, 00:20:14.483 "zerocopy_threshold": 0, 00:20:14.483 "tls_version": 0, 00:20:14.483 "enable_ktls": false 00:20:14.483 } 00:20:14.483 
} 00:20:14.484 ] 00:20:14.484 }, 00:20:14.484 { 00:20:14.484 "subsystem": "vmd", 00:20:14.484 "config": [] 00:20:14.484 }, 00:20:14.484 { 00:20:14.484 "subsystem": "accel", 00:20:14.484 "config": [ 00:20:14.484 { 00:20:14.484 "method": "accel_set_options", 00:20:14.484 "params": { 00:20:14.484 "small_cache_size": 128, 00:20:14.484 "large_cache_size": 16, 00:20:14.484 "task_count": 2048, 00:20:14.484 "sequence_count": 2048, 00:20:14.484 "buf_count": 2048 00:20:14.484 } 00:20:14.484 } 00:20:14.484 ] 00:20:14.484 }, 00:20:14.484 { 00:20:14.484 "subsystem": "bdev", 00:20:14.484 "config": [ 00:20:14.484 { 00:20:14.484 "method": "bdev_set_options", 00:20:14.484 "params": { 00:20:14.484 "bdev_io_pool_size": 65535, 00:20:14.484 "bdev_io_cache_size": 256, 00:20:14.484 "bdev_auto_examine": true, 00:20:14.484 "iobuf_small_cache_size": 128, 00:20:14.484 "iobuf_large_cache_size": 16 00:20:14.484 } 00:20:14.484 }, 00:20:14.484 { 00:20:14.484 "method": "bdev_raid_set_options", 00:20:14.484 "params": { 00:20:14.484 "process_window_size_kb": 1024, 00:20:14.484 "process_max_bandwidth_mb_sec": 0 00:20:14.484 } 00:20:14.484 }, 00:20:14.484 { 00:20:14.484 "method": "bdev_iscsi_set_options", 00:20:14.484 "params": { 00:20:14.484 "timeout_sec": 30 00:20:14.484 } 00:20:14.484 }, 00:20:14.484 { 00:20:14.484 "method": "bdev_nvme_set_options", 00:20:14.484 "params": { 00:20:14.484 "action_on_timeout": "none", 00:20:14.484 "timeout_us": 0, 00:20:14.484 "timeout_admin_us": 0, 00:20:14.484 "keep_alive_timeout_ms": 10000, 00:20:14.484 "arbitration_burst": 0, 00:20:14.484 "low_priority_weight": 0, 00:20:14.484 "medium_priority_weight": 0, 00:20:14.484 "high_priority_weight": 0, 00:20:14.484 "nvme_adminq_poll_period_us": 10000, 00:20:14.484 "nvme_ioq_poll_period_us": 0, 00:20:14.484 "io_queue_requests": 512, 00:20:14.484 "delay_cmd_submit": true, 00:20:14.484 "transport_retry_count": 4, 00:20:14.484 "bdev_retry_count": 3, 00:20:14.484 "transport_ack_timeout": 0, 00:20:14.484 "ctrlr_loss_timeout_sec": 0, 00:20:14.484 "reconnect_delay_sec": 0, 00:20:14.484 "fast_io_fail_timeout_sec": 0, 00:20:14.484 "disable_auto_failback": false, 00:20:14.484 "generate_uuids": false, 00:20:14.484 "transport_tos": 0, 00:20:14.484 "nvme_error_stat": false, 00:20:14.484 "rdma_srq_size": 0, 00:20:14.484 "io_path_stat": false, 00:20:14.484 "allow_accel_sequence": false, 00:20:14.484 "rdma_max_cq_size": 0, 00:20:14.484 "rdma_cm_event_timeout_ms": 0, 00:20:14.484 "dhchap_digests": [ 00:20:14.484 "sha256", 00:20:14.484 "sha384", 00:20:14.484 "sha512" 00:20:14.484 ], 00:20:14.484 "dhchap_dhgroups": [ 00:20:14.484 "null", 00:20:14.484 "ffdhe2048", 00:20:14.484 "ffdhe3072", 00:20:14.484 "ffdhe4096", 00:20:14.484 "ffdhe6144", 00:20:14.484 "ffdhe8192" 00:20:14.484 ] 00:20:14.484 } 00:20:14.484 }, 00:20:14.484 { 00:20:14.484 "method": "bdev_nvme_attach_controller", 00:20:14.484 "params": { 00:20:14.484 "name": "TLSTEST", 00:20:14.484 "trtype": "TCP", 00:20:14.484 "adrfam": "IPv4", 00:20:14.484 "traddr": "10.0.0.2", 00:20:14.484 "trsvcid": "4420", 00:20:14.484 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.484 "prchk_reftag": false, 00:20:14.484 "prchk_guard": false, 00:20:14.484 "ctrlr_loss_timeout_sec": 0, 00:20:14.484 "reconnect_delay_sec": 0, 00:20:14.484 "fast_io_fail_timeout_sec": 0, 00:20:14.484 "psk": "key0", 00:20:14.484 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:14.484 "hdgst": false, 00:20:14.484 "ddgst": false, 00:20:14.484 "multipath": "multipath" 00:20:14.484 } 00:20:14.484 }, 00:20:14.484 { 00:20:14.484 "method": 
"bdev_nvme_set_hotplug", 00:20:14.484 "params": { 00:20:14.484 "period_us": 100000, 00:20:14.484 "enable": false 00:20:14.484 } 00:20:14.484 }, 00:20:14.484 { 00:20:14.484 "method": "bdev_wait_for_examine" 00:20:14.484 } 00:20:14.484 ] 00:20:14.484 }, 00:20:14.484 { 00:20:14.484 "subsystem": "nbd", 00:20:14.484 "config": [] 00:20:14.484 } 00:20:14.484 ] 00:20:14.484 }' 00:20:14.484 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3678463 00:20:14.484 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3678463 ']' 00:20:14.484 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3678463 00:20:14.484 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:14.484 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:14.484 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3678463 00:20:14.484 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:14.484 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:14.484 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3678463' 00:20:14.484 killing process with pid 3678463 00:20:14.484 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3678463 00:20:14.484 Received shutdown signal, test time was about 10.000000 seconds 00:20:14.484 00:20:14.484 Latency(us) 00:20:14.484 [2024-11-26T18:58:15.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.484 [2024-11-26T18:58:15.305Z] =================================================================================================================== 00:20:14.484 [2024-11-26T18:58:15.305Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:14.484 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3678463 00:20:14.484 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3678096 00:20:14.484 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3678096 ']' 00:20:14.484 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3678096 00:20:14.484 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:14.484 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:14.484 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3678096 00:20:14.745 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:14.745 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:14.745 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3678096' 00:20:14.745 killing process with pid 3678096 00:20:14.745 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3678096 00:20:14.745 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3678096 00:20:14.745 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:14.745 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:14.745 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:14.745 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.745 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:14.745 "subsystems": [ 00:20:14.745 { 00:20:14.745 "subsystem": "keyring", 00:20:14.745 "config": [ 00:20:14.745 { 00:20:14.745 "method": "keyring_file_add_key", 00:20:14.745 "params": { 00:20:14.745 "name": "key0", 00:20:14.745 "path": "/tmp/tmp.YQc1xwwcMF" 00:20:14.745 } 00:20:14.745 } 00:20:14.745 ] 00:20:14.745 }, 00:20:14.745 { 00:20:14.745 "subsystem": "iobuf", 00:20:14.745 "config": [ 00:20:14.745 { 00:20:14.745 "method": "iobuf_set_options", 00:20:14.745 "params": { 00:20:14.745 "small_pool_count": 8192, 00:20:14.745 "large_pool_count": 1024, 00:20:14.745 "small_bufsize": 8192, 00:20:14.745 "large_bufsize": 135168, 00:20:14.745 "enable_numa": false 00:20:14.745 } 00:20:14.745 } 00:20:14.745 ] 00:20:14.745 }, 00:20:14.745 { 00:20:14.745 "subsystem": "sock", 00:20:14.745 "config": [ 00:20:14.745 { 00:20:14.745 "method": "sock_set_default_impl", 00:20:14.745 "params": { 00:20:14.745 "impl_name": "posix" 00:20:14.745 } 00:20:14.745 }, 00:20:14.745 { 00:20:14.745 "method": "sock_impl_set_options", 00:20:14.745 "params": { 00:20:14.745 "impl_name": "ssl", 00:20:14.745 "recv_buf_size": 4096, 00:20:14.745 "send_buf_size": 4096, 00:20:14.745 "enable_recv_pipe": true, 00:20:14.745 "enable_quickack": false, 00:20:14.745 "enable_placement_id": 0, 00:20:14.745 "enable_zerocopy_send_server": true, 00:20:14.745 "enable_zerocopy_send_client": false, 00:20:14.745 "zerocopy_threshold": 0, 00:20:14.745 "tls_version": 0, 00:20:14.745 "enable_ktls": false 00:20:14.745 } 00:20:14.745 }, 00:20:14.745 { 00:20:14.745 "method": "sock_impl_set_options", 00:20:14.745 "params": { 00:20:14.745 "impl_name": "posix", 00:20:14.745 "recv_buf_size": 2097152, 00:20:14.745 "send_buf_size": 2097152, 00:20:14.745 "enable_recv_pipe": true, 00:20:14.745 "enable_quickack": false, 00:20:14.745 "enable_placement_id": 0, 00:20:14.745 "enable_zerocopy_send_server": true, 00:20:14.745 "enable_zerocopy_send_client": false, 00:20:14.745 "zerocopy_threshold": 0, 00:20:14.745 "tls_version": 0, 00:20:14.745 "enable_ktls": false 00:20:14.745 } 00:20:14.745 } 00:20:14.745 ] 00:20:14.745 }, 00:20:14.745 { 00:20:14.745 "subsystem": "vmd", 00:20:14.745 "config": [] 00:20:14.745 }, 00:20:14.745 { 00:20:14.745 "subsystem": "accel", 00:20:14.745 "config": [ 00:20:14.745 { 00:20:14.745 "method": "accel_set_options", 00:20:14.745 "params": { 00:20:14.745 "small_cache_size": 128, 00:20:14.745 "large_cache_size": 16, 00:20:14.745 "task_count": 2048, 00:20:14.745 "sequence_count": 2048, 00:20:14.745 "buf_count": 2048 00:20:14.745 } 00:20:14.745 } 00:20:14.745 ] 00:20:14.745 }, 00:20:14.745 { 00:20:14.745 "subsystem": "bdev", 00:20:14.745 "config": [ 00:20:14.745 { 00:20:14.745 "method": "bdev_set_options", 00:20:14.745 "params": { 00:20:14.745 "bdev_io_pool_size": 65535, 00:20:14.745 "bdev_io_cache_size": 256, 00:20:14.745 "bdev_auto_examine": true, 00:20:14.745 "iobuf_small_cache_size": 128, 00:20:14.745 "iobuf_large_cache_size": 16 00:20:14.745 } 00:20:14.745 }, 00:20:14.745 { 00:20:14.745 "method": "bdev_raid_set_options", 00:20:14.745 "params": { 00:20:14.745 
"process_window_size_kb": 1024, 00:20:14.745 "process_max_bandwidth_mb_sec": 0 00:20:14.745 } 00:20:14.745 }, 00:20:14.745 { 00:20:14.745 "method": "bdev_iscsi_set_options", 00:20:14.745 "params": { 00:20:14.745 "timeout_sec": 30 00:20:14.745 } 00:20:14.745 }, 00:20:14.745 { 00:20:14.745 "method": "bdev_nvme_set_options", 00:20:14.745 "params": { 00:20:14.745 "action_on_timeout": "none", 00:20:14.745 "timeout_us": 0, 00:20:14.745 "timeout_admin_us": 0, 00:20:14.745 "keep_alive_timeout_ms": 10000, 00:20:14.745 "arbitration_burst": 0, 00:20:14.745 "low_priority_weight": 0, 00:20:14.745 "medium_priority_weight": 0, 00:20:14.745 "high_priority_weight": 0, 00:20:14.745 "nvme_adminq_poll_period_us": 10000, 00:20:14.745 "nvme_ioq_poll_period_us": 0, 00:20:14.745 "io_queue_requests": 0, 00:20:14.745 "delay_cmd_submit": true, 00:20:14.745 "transport_retry_count": 4, 00:20:14.745 "bdev_retry_count": 3, 00:20:14.745 "transport_ack_timeout": 0, 00:20:14.745 "ctrlr_loss_timeout_sec": 0, 00:20:14.745 "reconnect_delay_sec": 0, 00:20:14.745 "fast_io_fail_timeout_sec": 0, 00:20:14.745 "disable_auto_failback": false, 00:20:14.745 "generate_uuids": false, 00:20:14.745 "transport_tos": 0, 00:20:14.745 "nvme_error_stat": false, 00:20:14.745 "rdma_srq_size": 0, 00:20:14.745 "io_path_stat": false, 00:20:14.745 "allow_accel_sequence": false, 00:20:14.745 "rdma_max_cq_size": 0, 00:20:14.745 "rdma_cm_event_timeout_ms": 0, 00:20:14.745 "dhchap_digests": [ 00:20:14.745 "sha256", 00:20:14.745 "sha384", 00:20:14.745 "sha512" 00:20:14.745 ], 00:20:14.745 "dhchap_dhgroups": [ 00:20:14.745 "null", 00:20:14.745 "ffdhe2048", 00:20:14.745 "ffdhe3072", 00:20:14.745 "ffdhe4096", 00:20:14.745 "ffdhe6144", 00:20:14.745 "ffdhe8192" 00:20:14.745 ] 00:20:14.745 } 00:20:14.745 }, 00:20:14.745 { 00:20:14.745 "method": "bdev_nvme_set_hotplug", 00:20:14.745 "params": { 00:20:14.745 "period_us": 100000, 00:20:14.745 "enable": false 00:20:14.745 } 00:20:14.745 }, 00:20:14.745 { 00:20:14.745 "method": "bdev_malloc_create", 00:20:14.745 "params": { 00:20:14.745 "name": "malloc0", 00:20:14.745 "num_blocks": 8192, 00:20:14.745 "block_size": 4096, 00:20:14.745 "physical_block_size": 4096, 00:20:14.745 "uuid": "998b06ec-d481-4321-bd05-f5422990ca79", 00:20:14.745 "optimal_io_boundary": 0, 00:20:14.745 "md_size": 0, 00:20:14.745 "dif_type": 0, 00:20:14.745 "dif_is_head_of_md": false, 00:20:14.745 "dif_pi_format": 0 00:20:14.745 } 00:20:14.745 }, 00:20:14.745 { 00:20:14.745 "method": "bdev_wait_for_examine" 00:20:14.745 } 00:20:14.745 ] 00:20:14.745 }, 00:20:14.745 { 00:20:14.745 "subsystem": "nbd", 00:20:14.745 "config": [] 00:20:14.745 }, 00:20:14.745 { 00:20:14.745 "subsystem": "scheduler", 00:20:14.745 "config": [ 00:20:14.745 { 00:20:14.745 "method": "framework_set_scheduler", 00:20:14.745 "params": { 00:20:14.745 "name": "static" 00:20:14.745 } 00:20:14.745 } 00:20:14.745 ] 00:20:14.745 }, 00:20:14.745 { 00:20:14.745 "subsystem": "nvmf", 00:20:14.745 "config": [ 00:20:14.745 { 00:20:14.745 "method": "nvmf_set_config", 00:20:14.745 "params": { 00:20:14.745 "discovery_filter": "match_any", 00:20:14.745 "admin_cmd_passthru": { 00:20:14.745 "identify_ctrlr": false 00:20:14.745 }, 00:20:14.745 "dhchap_digests": [ 00:20:14.745 "sha256", 00:20:14.746 "sha384", 00:20:14.746 "sha512" 00:20:14.746 ], 00:20:14.746 "dhchap_dhgroups": [ 00:20:14.746 "null", 00:20:14.746 "ffdhe2048", 00:20:14.746 "ffdhe3072", 00:20:14.746 "ffdhe4096", 00:20:14.746 "ffdhe6144", 00:20:14.746 "ffdhe8192" 00:20:14.746 ] 00:20:14.746 } 00:20:14.746 }, 00:20:14.746 { 
00:20:14.746 "method": "nvmf_set_max_subsystems", 00:20:14.746 "params": { 00:20:14.746 "max_subsystems": 1024 00:20:14.746 } 00:20:14.746 }, 00:20:14.746 { 00:20:14.746 "method": "nvmf_set_crdt", 00:20:14.746 "params": { 00:20:14.746 "crdt1": 0, 00:20:14.746 "crdt2": 0, 00:20:14.746 "crdt3": 0 00:20:14.746 } 00:20:14.746 }, 00:20:14.746 { 00:20:14.746 "method": "nvmf_create_transport", 00:20:14.746 "params": { 00:20:14.746 "trtype": "TCP", 00:20:14.746 "max_queue_depth": 128, 00:20:14.746 "max_io_qpairs_per_ctrlr": 127, 00:20:14.746 "in_capsule_data_size": 4096, 00:20:14.746 "max_io_size": 131072, 00:20:14.746 "io_unit_size": 131072, 00:20:14.746 "max_aq_depth": 128, 00:20:14.746 "num_shared_buffers": 511, 00:20:14.746 "buf_cache_size": 4294967295, 00:20:14.746 "dif_insert_or_strip": false, 00:20:14.746 "zcopy": false, 00:20:14.746 "c2h_success": false, 00:20:14.746 "sock_priority": 0, 00:20:14.746 "abort_timeout_sec": 1, 00:20:14.746 "ack_timeout": 0, 00:20:14.746 "data_wr_pool_size": 0 00:20:14.746 } 00:20:14.746 }, 00:20:14.746 { 00:20:14.746 "method": "nvmf_create_subsystem", 00:20:14.746 "params": { 00:20:14.746 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.746 "allow_any_host": false, 00:20:14.746 "serial_number": "SPDK00000000000001", 00:20:14.746 "model_number": "SPDK bdev Controller", 00:20:14.746 "max_namespaces": 10, 00:20:14.746 "min_cntlid": 1, 00:20:14.746 "max_cntlid": 65519, 00:20:14.746 "ana_reporting": false 00:20:14.746 } 00:20:14.746 }, 00:20:14.746 { 00:20:14.746 "method": "nvmf_subsystem_add_host", 00:20:14.746 "params": { 00:20:14.746 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.746 "host": "nqn.2016-06.io.spdk:host1", 00:20:14.746 "psk": "key0" 00:20:14.746 } 00:20:14.746 }, 00:20:14.746 { 00:20:14.746 "method": "nvmf_subsystem_add_ns", 00:20:14.746 "params": { 00:20:14.746 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.746 "namespace": { 00:20:14.746 "nsid": 1, 00:20:14.746 "bdev_name": "malloc0", 00:20:14.746 "nguid": "998B06ECD4814321BD05F5422990CA79", 00:20:14.746 "uuid": "998b06ec-d481-4321-bd05-f5422990ca79", 00:20:14.746 "no_auto_visible": false 00:20:14.746 } 00:20:14.746 } 00:20:14.746 }, 00:20:14.746 { 00:20:14.746 "method": "nvmf_subsystem_add_listener", 00:20:14.746 "params": { 00:20:14.746 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.746 "listen_address": { 00:20:14.746 "trtype": "TCP", 00:20:14.746 "adrfam": "IPv4", 00:20:14.746 "traddr": "10.0.0.2", 00:20:14.746 "trsvcid": "4420" 00:20:14.746 }, 00:20:14.746 "secure_channel": true 00:20:14.746 } 00:20:14.746 } 00:20:14.746 ] 00:20:14.746 } 00:20:14.746 ] 00:20:14.746 }' 00:20:14.746 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3678890 00:20:14.746 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3678890 00:20:14.746 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:14.746 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3678890 ']' 00:20:14.746 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.746 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:14.746 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:20:14.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.746 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:14.746 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.746 [2024-11-26 19:58:15.517540] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:20:14.746 [2024-11-26 19:58:15.517594] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.006 [2024-11-26 19:58:15.608748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.006 [2024-11-26 19:58:15.641442] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:15.006 [2024-11-26 19:58:15.641476] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:15.006 [2024-11-26 19:58:15.641482] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:15.006 [2024-11-26 19:58:15.641486] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:15.006 [2024-11-26 19:58:15.641491] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:15.006 [2024-11-26 19:58:15.641989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.265 [2024-11-26 19:58:15.835972] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.265 [2024-11-26 19:58:15.868000] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:15.265 [2024-11-26 19:58:15.868216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.524 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:15.524 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:15.524 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:15.524 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:15.524 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.784 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.784 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3679172 00:20:15.784 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3679172 /var/tmp/bdevperf.sock 00:20:15.784 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3679172 ']' 00:20:15.784 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:15.784 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:15.784 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
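The case at tls.sh@205 does not rebuild this configuration RPC by RPC: tls.sh@198 captured the running target's state with save_config into $tgtconf (the JSON dumped above), and the new nvmf_tgt is started with -c /dev/fd/62 so that JSON is fed back in over file descriptor 62. The harness hides the redirection, but the shape is presumably something like this bash sketch (binary and rpc.py paths shortened, fd number as in the trace):

  tgtconf=$(rpc.py save_config)                           # serialize the live target: keyring, bdevs, nvmf subsystem
  nvmf_tgt -m 0x2 -c /dev/fd/62 62< <(echo "$tgtconf")    # replay it on a clean process, no further RPCs needed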
00:20:15.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:15.784 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:15.784 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:15.784 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.784 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:15.784 "subsystems": [ 00:20:15.784 { 00:20:15.784 "subsystem": "keyring", 00:20:15.784 "config": [ 00:20:15.784 { 00:20:15.784 "method": "keyring_file_add_key", 00:20:15.784 "params": { 00:20:15.784 "name": "key0", 00:20:15.784 "path": "/tmp/tmp.YQc1xwwcMF" 00:20:15.784 } 00:20:15.784 } 00:20:15.784 ] 00:20:15.784 }, 00:20:15.784 { 00:20:15.784 "subsystem": "iobuf", 00:20:15.784 "config": [ 00:20:15.784 { 00:20:15.784 "method": "iobuf_set_options", 00:20:15.784 "params": { 00:20:15.784 "small_pool_count": 8192, 00:20:15.784 "large_pool_count": 1024, 00:20:15.784 "small_bufsize": 8192, 00:20:15.784 "large_bufsize": 135168, 00:20:15.784 "enable_numa": false 00:20:15.784 } 00:20:15.784 } 00:20:15.784 ] 00:20:15.784 }, 00:20:15.784 { 00:20:15.784 "subsystem": "sock", 00:20:15.784 "config": [ 00:20:15.784 { 00:20:15.784 "method": "sock_set_default_impl", 00:20:15.784 "params": { 00:20:15.784 "impl_name": "posix" 00:20:15.784 } 00:20:15.784 }, 00:20:15.784 { 00:20:15.784 "method": "sock_impl_set_options", 00:20:15.784 "params": { 00:20:15.784 "impl_name": "ssl", 00:20:15.784 "recv_buf_size": 4096, 00:20:15.784 "send_buf_size": 4096, 00:20:15.784 "enable_recv_pipe": true, 00:20:15.784 "enable_quickack": false, 00:20:15.784 "enable_placement_id": 0, 00:20:15.784 "enable_zerocopy_send_server": true, 00:20:15.784 "enable_zerocopy_send_client": false, 00:20:15.784 "zerocopy_threshold": 0, 00:20:15.784 "tls_version": 0, 00:20:15.784 "enable_ktls": false 00:20:15.784 } 00:20:15.784 }, 00:20:15.784 { 00:20:15.784 "method": "sock_impl_set_options", 00:20:15.784 "params": { 00:20:15.784 "impl_name": "posix", 00:20:15.784 "recv_buf_size": 2097152, 00:20:15.784 "send_buf_size": 2097152, 00:20:15.784 "enable_recv_pipe": true, 00:20:15.784 "enable_quickack": false, 00:20:15.784 "enable_placement_id": 0, 00:20:15.784 "enable_zerocopy_send_server": true, 00:20:15.784 "enable_zerocopy_send_client": false, 00:20:15.784 "zerocopy_threshold": 0, 00:20:15.784 "tls_version": 0, 00:20:15.784 "enable_ktls": false 00:20:15.784 } 00:20:15.784 } 00:20:15.784 ] 00:20:15.784 }, 00:20:15.784 { 00:20:15.784 "subsystem": "vmd", 00:20:15.784 "config": [] 00:20:15.784 }, 00:20:15.784 { 00:20:15.784 "subsystem": "accel", 00:20:15.784 "config": [ 00:20:15.784 { 00:20:15.784 "method": "accel_set_options", 00:20:15.784 "params": { 00:20:15.784 "small_cache_size": 128, 00:20:15.784 "large_cache_size": 16, 00:20:15.784 "task_count": 2048, 00:20:15.784 "sequence_count": 2048, 00:20:15.784 "buf_count": 2048 00:20:15.784 } 00:20:15.784 } 00:20:15.784 ] 00:20:15.784 }, 00:20:15.784 { 00:20:15.784 "subsystem": "bdev", 00:20:15.784 "config": [ 00:20:15.784 { 00:20:15.784 "method": "bdev_set_options", 00:20:15.784 "params": { 00:20:15.784 "bdev_io_pool_size": 65535, 00:20:15.784 "bdev_io_cache_size": 256, 00:20:15.784 "bdev_auto_examine": true, 00:20:15.784 "iobuf_small_cache_size": 128, 
00:20:15.784 "iobuf_large_cache_size": 16 00:20:15.784 } 00:20:15.784 }, 00:20:15.784 { 00:20:15.784 "method": "bdev_raid_set_options", 00:20:15.784 "params": { 00:20:15.784 "process_window_size_kb": 1024, 00:20:15.784 "process_max_bandwidth_mb_sec": 0 00:20:15.784 } 00:20:15.784 }, 00:20:15.784 { 00:20:15.784 "method": "bdev_iscsi_set_options", 00:20:15.784 "params": { 00:20:15.784 "timeout_sec": 30 00:20:15.784 } 00:20:15.784 }, 00:20:15.784 { 00:20:15.784 "method": "bdev_nvme_set_options", 00:20:15.784 "params": { 00:20:15.784 "action_on_timeout": "none", 00:20:15.784 "timeout_us": 0, 00:20:15.784 "timeout_admin_us": 0, 00:20:15.784 "keep_alive_timeout_ms": 10000, 00:20:15.784 "arbitration_burst": 0, 00:20:15.784 "low_priority_weight": 0, 00:20:15.784 "medium_priority_weight": 0, 00:20:15.784 "high_priority_weight": 0, 00:20:15.784 "nvme_adminq_poll_period_us": 10000, 00:20:15.784 "nvme_ioq_poll_period_us": 0, 00:20:15.784 "io_queue_requests": 512, 00:20:15.784 "delay_cmd_submit": true, 00:20:15.784 "transport_retry_count": 4, 00:20:15.784 "bdev_retry_count": 3, 00:20:15.784 "transport_ack_timeout": 0, 00:20:15.784 "ctrlr_loss_timeout_sec": 0, 00:20:15.784 "reconnect_delay_sec": 0, 00:20:15.784 "fast_io_fail_timeout_sec": 0, 00:20:15.784 "disable_auto_failback": false, 00:20:15.784 "generate_uuids": false, 00:20:15.784 "transport_tos": 0, 00:20:15.784 "nvme_error_stat": false, 00:20:15.784 "rdma_srq_size": 0, 00:20:15.784 "io_path_stat": false, 00:20:15.784 "allow_accel_sequence": false, 00:20:15.784 "rdma_max_cq_size": 0, 00:20:15.784 "rdma_cm_event_timeout_ms": 0, 00:20:15.784 "dhchap_digests": [ 00:20:15.784 "sha256", 00:20:15.784 "sha384", 00:20:15.784 "sha512" 00:20:15.784 ], 00:20:15.784 "dhchap_dhgroups": [ 00:20:15.784 "null", 00:20:15.784 "ffdhe2048", 00:20:15.785 "ffdhe3072", 00:20:15.785 "ffdhe4096", 00:20:15.785 "ffdhe6144", 00:20:15.785 "ffdhe8192" 00:20:15.785 ] 00:20:15.785 } 00:20:15.785 }, 00:20:15.785 { 00:20:15.785 "method": "bdev_nvme_attach_controller", 00:20:15.785 "params": { 00:20:15.785 "name": "TLSTEST", 00:20:15.785 "trtype": "TCP", 00:20:15.785 "adrfam": "IPv4", 00:20:15.785 "traddr": "10.0.0.2", 00:20:15.785 "trsvcid": "4420", 00:20:15.785 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.785 "prchk_reftag": false, 00:20:15.785 "prchk_guard": false, 00:20:15.785 "ctrlr_loss_timeout_sec": 0, 00:20:15.785 "reconnect_delay_sec": 0, 00:20:15.785 "fast_io_fail_timeout_sec": 0, 00:20:15.785 "psk": "key0", 00:20:15.785 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:15.785 "hdgst": false, 00:20:15.785 "ddgst": false, 00:20:15.785 "multipath": "multipath" 00:20:15.785 } 00:20:15.785 }, 00:20:15.785 { 00:20:15.785 "method": "bdev_nvme_set_hotplug", 00:20:15.785 "params": { 00:20:15.785 "period_us": 100000, 00:20:15.785 "enable": false 00:20:15.785 } 00:20:15.785 }, 00:20:15.785 { 00:20:15.785 "method": "bdev_wait_for_examine" 00:20:15.785 } 00:20:15.785 ] 00:20:15.785 }, 00:20:15.785 { 00:20:15.785 "subsystem": "nbd", 00:20:15.785 "config": [] 00:20:15.785 } 00:20:15.785 ] 00:20:15.785 }' 00:20:15.785 [2024-11-26 19:58:16.428848] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:20:15.785 [2024-11-26 19:58:16.428898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3679172 ] 00:20:15.785 [2024-11-26 19:58:16.515075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.785 [2024-11-26 19:58:16.550488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.045 [2024-11-26 19:58:16.690937] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:16.616 19:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:16.616 19:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:16.616 19:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:16.616 Running I/O for 10 seconds... 00:20:18.504 4569.00 IOPS, 17.85 MiB/s [2024-11-26T18:58:20.713Z] 5053.50 IOPS, 19.74 MiB/s [2024-11-26T18:58:21.657Z] 5124.67 IOPS, 20.02 MiB/s [2024-11-26T18:58:22.597Z] 4709.75 IOPS, 18.40 MiB/s [2024-11-26T18:58:23.538Z] 4896.00 IOPS, 19.12 MiB/s [2024-11-26T18:58:24.480Z] 4790.33 IOPS, 18.71 MiB/s [2024-11-26T18:58:25.421Z] 5003.00 IOPS, 19.54 MiB/s [2024-11-26T18:58:26.364Z] 5049.88 IOPS, 19.73 MiB/s [2024-11-26T18:58:27.750Z] 5108.11 IOPS, 19.95 MiB/s [2024-11-26T18:58:27.750Z] 5186.90 IOPS, 20.26 MiB/s 00:20:26.929 Latency(us) 00:20:26.929 [2024-11-26T18:58:27.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.929 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:26.929 Verification LBA range: start 0x0 length 0x2000 00:20:26.929 TLSTESTn1 : 10.04 5178.80 20.23 0.00 0.00 24654.09 5843.63 39321.60 00:20:26.929 [2024-11-26T18:58:27.750Z] =================================================================================================================== 00:20:26.929 [2024-11-26T18:58:27.750Z] Total : 5178.80 20.23 0.00 0.00 24654.09 5843.63 39321.60 00:20:26.929 { 00:20:26.929 "results": [ 00:20:26.929 { 00:20:26.929 "job": "TLSTESTn1", 00:20:26.929 "core_mask": "0x4", 00:20:26.929 "workload": "verify", 00:20:26.929 "status": "finished", 00:20:26.929 "verify_range": { 00:20:26.929 "start": 0, 00:20:26.929 "length": 8192 00:20:26.929 }, 00:20:26.929 "queue_depth": 128, 00:20:26.929 "io_size": 4096, 00:20:26.929 "runtime": 10.040157, 00:20:26.929 "iops": 5178.803478869902, 00:20:26.929 "mibps": 20.229701089335556, 00:20:26.929 "io_failed": 0, 00:20:26.929 "io_timeout": 0, 00:20:26.929 "avg_latency_us": 24654.092046054826, 00:20:26.929 "min_latency_us": 5843.626666666667, 00:20:26.929 "max_latency_us": 39321.6 00:20:26.929 } 00:20:26.929 ], 00:20:26.929 "core_count": 1 00:20:26.929 } 00:20:26.929 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:26.929 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3679172 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3679172 ']' 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3679172 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3679172 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3679172' 00:20:26.930 killing process with pid 3679172 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3679172 00:20:26.930 Received shutdown signal, test time was about 10.000000 seconds 00:20:26.930 00:20:26.930 Latency(us) 00:20:26.930 [2024-11-26T18:58:27.751Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.930 [2024-11-26T18:58:27.751Z] =================================================================================================================== 00:20:26.930 [2024-11-26T18:58:27.751Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3679172 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3678890 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3678890 ']' 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3678890 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3678890 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3678890' 00:20:26.930 killing process with pid 3678890 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3678890 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3678890 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3681320 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3681320 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
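The verify run that finished above is self-consistent: bdevperf was started with -o 4096, so every I/O is 4 KiB and the reported 5178.80 IOPS works out to 5178.80 x 4096 / 1048576, about 20.23 MiB/s, matching both the MiB/s column of the latency table and the "mibps" field of the results JSON. If that JSON block were saved to a file, say results.json, the check is one jq expression (assuming jq is installed on the test node):

  jq '.results[0] | {iops, mibps, derived: (.iops * 4096 / 1048576)}' results.json
  # iops 5178.803..., mibps 20.2297..., derived 20.2297...: the three agree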
00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3681320 ']' 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.930 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:27.191 [2024-11-26 19:58:27.792336] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:20:27.191 [2024-11-26 19:58:27.792409] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:27.191 [2024-11-26 19:58:27.889076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.191 [2024-11-26 19:58:27.938245] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:27.191 [2024-11-26 19:58:27.938297] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:27.191 [2024-11-26 19:58:27.938305] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:27.191 [2024-11-26 19:58:27.938312] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:27.191 [2024-11-26 19:58:27.938319] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
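The app_setup_trace NOTICE above describes two ways to inspect the tracepoints enabled by -e 0xFFFF. A sketch of both follows; the live form is quoted verbatim from the NOTICE, while parsing an offline copy with -f follows standard spdk_trace usage and is an assumption, not something shown in this log:

    # Live snapshot from the running target, exactly as the NOTICE suggests:
    spdk_trace -s nvmf -i 0
    # Offline analysis: copy the shared-memory trace file first, then parse it.
    cp /dev/shm/nvmf_trace.0 /tmp/
    spdk_trace -f /tmp/nvmf_trace.0
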
00:20:27.191 [2024-11-26 19:58:27.939097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.135 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:28.135 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:28.135 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:28.135 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:28.135 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.136 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:28.136 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.YQc1xwwcMF 00:20:28.136 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.YQc1xwwcMF 00:20:28.136 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:28.136 [2024-11-26 19:58:28.797094] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:28.136 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:28.397 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:28.397 [2024-11-26 19:58:29.190071] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:28.397 [2024-11-26 19:58:29.190412] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:28.659 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:28.659 malloc0 00:20:28.659 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:28.920 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.YQc1xwwcMF 00:20:29.181 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:29.442 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3681876 00:20:29.443 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:29.443 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:29.443 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3681876 /var/tmp/bdevperf.sock 00:20:29.443 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 3681876 ']' 00:20:29.443 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:29.443 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.443 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:29.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:29.443 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.443 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.443 [2024-11-26 19:58:30.077387] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:20:29.443 [2024-11-26 19:58:30.077465] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3681876 ] 00:20:29.443 [2024-11-26 19:58:30.167585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.443 [2024-11-26 19:58:30.202277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.385 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:30.385 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:30.385 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YQc1xwwcMF 00:20:30.385 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:30.645 [2024-11-26 19:58:31.222025] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:30.645 nvme0n1 00:20:30.645 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:30.645 Running I/O for 1 seconds... 
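Collected in one place, the setup_nvmf_tgt sequence that the xtrace above steps through reduces to the RPCs below. Socket paths, NQNs, and the PSK file name are taken verbatim from this run (the /tmp key file is run-specific); -k on the listener is what enables TLS:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Target side: TCP transport, subsystem, TLS listener, and a malloc namespace.
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # Register the PSK interchange file and authorize the host against it.
    $rpc keyring_file_add_key key0 /tmp/tmp.YQc1xwwcMF
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    # Initiator side (bdevperf's RPC socket): same key, then attach over TLS.
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YQc1xwwcMF
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The two "TLS support is considered experimental" notices in the log correspond to the -k listener and the --psk attach, respectively.
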
00:20:32.029 2824.00 IOPS, 11.03 MiB/s 00:20:32.029 Latency(us) 00:20:32.029 [2024-11-26T18:58:32.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.029 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:32.029 Verification LBA range: start 0x0 length 0x2000 00:20:32.029 nvme0n1 : 1.02 2892.92 11.30 0.00 0.00 43940.11 4778.67 65972.91 00:20:32.029 [2024-11-26T18:58:32.850Z] =================================================================================================================== 00:20:32.029 [2024-11-26T18:58:32.850Z] Total : 2892.92 11.30 0.00 0.00 43940.11 4778.67 65972.91 00:20:32.029 { 00:20:32.029 "results": [ 00:20:32.029 { 00:20:32.029 "job": "nvme0n1", 00:20:32.029 "core_mask": "0x2", 00:20:32.029 "workload": "verify", 00:20:32.029 "status": "finished", 00:20:32.029 "verify_range": { 00:20:32.029 "start": 0, 00:20:32.030 "length": 8192 00:20:32.030 }, 00:20:32.030 "queue_depth": 128, 00:20:32.030 "io_size": 4096, 00:20:32.030 "runtime": 1.020423, 00:20:32.030 "iops": 2892.9179369731964, 00:20:32.030 "mibps": 11.300460691301549, 00:20:32.030 "io_failed": 0, 00:20:32.030 "io_timeout": 0, 00:20:32.030 "avg_latency_us": 43940.11288166215, 00:20:32.030 "min_latency_us": 4778.666666666667, 00:20:32.030 "max_latency_us": 65972.90666666666 00:20:32.030 } 00:20:32.030 ], 00:20:32.030 "core_count": 1 00:20:32.030 } 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3681876 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3681876 ']' 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3681876 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3681876 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3681876' 00:20:32.030 killing process with pid 3681876 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3681876 00:20:32.030 Received shutdown signal, test time was about 1.000000 seconds 00:20:32.030 00:20:32.030 Latency(us) 00:20:32.030 [2024-11-26T18:58:32.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.030 [2024-11-26T18:58:32.851Z] =================================================================================================================== 00:20:32.030 [2024-11-26T18:58:32.851Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3681876 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3681320 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3681320 ']' 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3681320 00:20:32.030 19:58:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3681320 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3681320' 00:20:32.030 killing process with pid 3681320 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3681320 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3681320 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3682273 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3682273 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3682273 ']' 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:32.030 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.290 [2024-11-26 19:58:32.875635] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:20:32.290 [2024-11-26 19:58:32.875693] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.290 [2024-11-26 19:58:32.970512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.290 [2024-11-26 19:58:33.013037] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:32.290 [2024-11-26 19:58:33.013084] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:32.290 [2024-11-26 19:58:33.013092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:32.290 [2024-11-26 19:58:33.013099] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:32.290 [2024-11-26 19:58:33.013105] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:32.290 [2024-11-26 19:58:33.013767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.233 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:33.233 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:33.233 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:33.233 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:33.233 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.233 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:33.233 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:33.233 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.233 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.233 [2024-11-26 19:58:33.754201] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:33.233 malloc0 00:20:33.233 [2024-11-26 19:58:33.784260] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:33.233 [2024-11-26 19:58:33.784584] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:33.233 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.233 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3682586 00:20:33.233 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3682586 /var/tmp/bdevperf.sock 00:20:33.233 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3682586 ']' 00:20:33.233 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:33.233 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:33.233 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:33.233 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:33.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:33.233 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:33.233 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.233 [2024-11-26 19:58:33.877055] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:20:33.233 [2024-11-26 19:58:33.877125] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3682586 ] 00:20:33.233 [2024-11-26 19:58:33.966233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.233 [2024-11-26 19:58:34.000635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.176 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.176 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:34.176 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YQc1xwwcMF 00:20:34.176 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:34.176 [2024-11-26 19:58:34.983529] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:34.436 nvme0n1 00:20:34.436 19:58:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:34.436 Running I/O for 1 seconds... 00:20:35.376 5733.00 IOPS, 22.39 MiB/s 00:20:35.376 Latency(us) 00:20:35.376 [2024-11-26T18:58:36.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.376 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:35.376 Verification LBA range: start 0x0 length 0x2000 00:20:35.376 nvme0n1 : 1.01 5790.46 22.62 0.00 0.00 21968.54 4778.67 23811.41 00:20:35.376 [2024-11-26T18:58:36.197Z] =================================================================================================================== 00:20:35.376 [2024-11-26T18:58:36.197Z] Total : 5790.46 22.62 0.00 0.00 21968.54 4778.67 23811.41 00:20:35.376 { 00:20:35.376 "results": [ 00:20:35.376 { 00:20:35.376 "job": "nvme0n1", 00:20:35.376 "core_mask": "0x2", 00:20:35.376 "workload": "verify", 00:20:35.376 "status": "finished", 00:20:35.376 "verify_range": { 00:20:35.376 "start": 0, 00:20:35.376 "length": 8192 00:20:35.376 }, 00:20:35.376 "queue_depth": 128, 00:20:35.376 "io_size": 4096, 00:20:35.376 "runtime": 1.012182, 00:20:35.376 "iops": 5790.460608862833, 00:20:35.376 "mibps": 22.618986753370443, 00:20:35.376 "io_failed": 0, 00:20:35.376 "io_timeout": 0, 00:20:35.376 "avg_latency_us": 21968.54412557584, 00:20:35.376 "min_latency_us": 4778.666666666667, 00:20:35.376 "max_latency_us": 23811.413333333334 00:20:35.376 } 00:20:35.376 ], 00:20:35.376 "core_count": 1 00:20:35.376 } 00:20:35.376 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:35.637 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.637 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.637 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.637 19:58:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:35.637 "subsystems": [ 00:20:35.637 { 00:20:35.637 "subsystem": "keyring", 00:20:35.637 "config": [ 00:20:35.638 { 00:20:35.638 "method": "keyring_file_add_key", 00:20:35.638 "params": { 00:20:35.638 "name": "key0", 00:20:35.638 "path": "/tmp/tmp.YQc1xwwcMF" 00:20:35.638 } 00:20:35.638 } 00:20:35.638 ] 00:20:35.638 }, 00:20:35.638 { 00:20:35.638 "subsystem": "iobuf", 00:20:35.638 "config": [ 00:20:35.638 { 00:20:35.638 "method": "iobuf_set_options", 00:20:35.638 "params": { 00:20:35.638 "small_pool_count": 8192, 00:20:35.638 "large_pool_count": 1024, 00:20:35.638 "small_bufsize": 8192, 00:20:35.638 "large_bufsize": 135168, 00:20:35.638 "enable_numa": false 00:20:35.638 } 00:20:35.638 } 00:20:35.638 ] 00:20:35.638 }, 00:20:35.638 { 00:20:35.638 "subsystem": "sock", 00:20:35.638 "config": [ 00:20:35.638 { 00:20:35.638 "method": "sock_set_default_impl", 00:20:35.638 "params": { 00:20:35.638 "impl_name": "posix" 00:20:35.638 } 00:20:35.638 }, 00:20:35.638 { 00:20:35.638 "method": "sock_impl_set_options", 00:20:35.638 "params": { 00:20:35.638 "impl_name": "ssl", 00:20:35.638 "recv_buf_size": 4096, 00:20:35.638 "send_buf_size": 4096, 00:20:35.638 "enable_recv_pipe": true, 00:20:35.638 "enable_quickack": false, 00:20:35.638 "enable_placement_id": 0, 00:20:35.638 "enable_zerocopy_send_server": true, 00:20:35.638 "enable_zerocopy_send_client": false, 00:20:35.638 "zerocopy_threshold": 0, 00:20:35.638 "tls_version": 0, 00:20:35.638 "enable_ktls": false 00:20:35.638 } 00:20:35.638 }, 00:20:35.638 { 00:20:35.638 "method": "sock_impl_set_options", 00:20:35.638 "params": { 00:20:35.638 "impl_name": "posix", 00:20:35.638 "recv_buf_size": 2097152, 00:20:35.638 "send_buf_size": 2097152, 00:20:35.638 "enable_recv_pipe": true, 00:20:35.638 "enable_quickack": false, 00:20:35.638 "enable_placement_id": 0, 00:20:35.638 "enable_zerocopy_send_server": true, 00:20:35.638 "enable_zerocopy_send_client": false, 00:20:35.638 "zerocopy_threshold": 0, 00:20:35.638 "tls_version": 0, 00:20:35.638 "enable_ktls": false 00:20:35.638 } 00:20:35.638 } 00:20:35.638 ] 00:20:35.638 }, 00:20:35.638 { 00:20:35.638 "subsystem": "vmd", 00:20:35.638 "config": [] 00:20:35.638 }, 00:20:35.638 { 00:20:35.638 "subsystem": "accel", 00:20:35.638 "config": [ 00:20:35.638 { 00:20:35.638 "method": "accel_set_options", 00:20:35.638 "params": { 00:20:35.638 "small_cache_size": 128, 00:20:35.638 "large_cache_size": 16, 00:20:35.638 "task_count": 2048, 00:20:35.638 "sequence_count": 2048, 00:20:35.638 "buf_count": 2048 00:20:35.638 } 00:20:35.638 } 00:20:35.638 ] 00:20:35.638 }, 00:20:35.638 { 00:20:35.638 "subsystem": "bdev", 00:20:35.638 "config": [ 00:20:35.638 { 00:20:35.638 "method": "bdev_set_options", 00:20:35.638 "params": { 00:20:35.638 "bdev_io_pool_size": 65535, 00:20:35.638 "bdev_io_cache_size": 256, 00:20:35.638 "bdev_auto_examine": true, 00:20:35.638 "iobuf_small_cache_size": 128, 00:20:35.638 "iobuf_large_cache_size": 16 00:20:35.638 } 00:20:35.638 }, 00:20:35.638 { 00:20:35.638 "method": "bdev_raid_set_options", 00:20:35.638 "params": { 00:20:35.638 "process_window_size_kb": 1024, 00:20:35.638 "process_max_bandwidth_mb_sec": 0 00:20:35.638 } 00:20:35.638 }, 00:20:35.638 { 00:20:35.638 "method": "bdev_iscsi_set_options", 00:20:35.638 "params": { 00:20:35.638 "timeout_sec": 30 00:20:35.638 } 00:20:35.638 }, 00:20:35.638 { 00:20:35.638 "method": "bdev_nvme_set_options", 00:20:35.638 "params": { 00:20:35.638 "action_on_timeout": "none", 00:20:35.638 
"timeout_us": 0, 00:20:35.638 "timeout_admin_us": 0, 00:20:35.638 "keep_alive_timeout_ms": 10000, 00:20:35.638 "arbitration_burst": 0, 00:20:35.638 "low_priority_weight": 0, 00:20:35.638 "medium_priority_weight": 0, 00:20:35.638 "high_priority_weight": 0, 00:20:35.638 "nvme_adminq_poll_period_us": 10000, 00:20:35.638 "nvme_ioq_poll_period_us": 0, 00:20:35.638 "io_queue_requests": 0, 00:20:35.638 "delay_cmd_submit": true, 00:20:35.638 "transport_retry_count": 4, 00:20:35.638 "bdev_retry_count": 3, 00:20:35.638 "transport_ack_timeout": 0, 00:20:35.638 "ctrlr_loss_timeout_sec": 0, 00:20:35.638 "reconnect_delay_sec": 0, 00:20:35.638 "fast_io_fail_timeout_sec": 0, 00:20:35.638 "disable_auto_failback": false, 00:20:35.638 "generate_uuids": false, 00:20:35.638 "transport_tos": 0, 00:20:35.638 "nvme_error_stat": false, 00:20:35.638 "rdma_srq_size": 0, 00:20:35.638 "io_path_stat": false, 00:20:35.638 "allow_accel_sequence": false, 00:20:35.638 "rdma_max_cq_size": 0, 00:20:35.638 "rdma_cm_event_timeout_ms": 0, 00:20:35.638 "dhchap_digests": [ 00:20:35.638 "sha256", 00:20:35.638 "sha384", 00:20:35.638 "sha512" 00:20:35.638 ], 00:20:35.638 "dhchap_dhgroups": [ 00:20:35.638 "null", 00:20:35.638 "ffdhe2048", 00:20:35.638 "ffdhe3072", 00:20:35.638 "ffdhe4096", 00:20:35.638 "ffdhe6144", 00:20:35.638 "ffdhe8192" 00:20:35.638 ] 00:20:35.638 } 00:20:35.638 }, 00:20:35.638 { 00:20:35.638 "method": "bdev_nvme_set_hotplug", 00:20:35.638 "params": { 00:20:35.638 "period_us": 100000, 00:20:35.638 "enable": false 00:20:35.638 } 00:20:35.638 }, 00:20:35.638 { 00:20:35.638 "method": "bdev_malloc_create", 00:20:35.638 "params": { 00:20:35.638 "name": "malloc0", 00:20:35.638 "num_blocks": 8192, 00:20:35.638 "block_size": 4096, 00:20:35.638 "physical_block_size": 4096, 00:20:35.638 "uuid": "b5aff8b6-55b5-42b7-b564-b81039420a23", 00:20:35.638 "optimal_io_boundary": 0, 00:20:35.638 "md_size": 0, 00:20:35.638 "dif_type": 0, 00:20:35.638 "dif_is_head_of_md": false, 00:20:35.638 "dif_pi_format": 0 00:20:35.638 } 00:20:35.638 }, 00:20:35.638 { 00:20:35.638 "method": "bdev_wait_for_examine" 00:20:35.638 } 00:20:35.638 ] 00:20:35.638 }, 00:20:35.638 { 00:20:35.638 "subsystem": "nbd", 00:20:35.638 "config": [] 00:20:35.638 }, 00:20:35.638 { 00:20:35.638 "subsystem": "scheduler", 00:20:35.638 "config": [ 00:20:35.638 { 00:20:35.638 "method": "framework_set_scheduler", 00:20:35.638 "params": { 00:20:35.638 "name": "static" 00:20:35.638 } 00:20:35.638 } 00:20:35.638 ] 00:20:35.638 }, 00:20:35.638 { 00:20:35.638 "subsystem": "nvmf", 00:20:35.638 "config": [ 00:20:35.638 { 00:20:35.638 "method": "nvmf_set_config", 00:20:35.638 "params": { 00:20:35.638 "discovery_filter": "match_any", 00:20:35.638 "admin_cmd_passthru": { 00:20:35.638 "identify_ctrlr": false 00:20:35.638 }, 00:20:35.638 "dhchap_digests": [ 00:20:35.638 "sha256", 00:20:35.638 "sha384", 00:20:35.638 "sha512" 00:20:35.638 ], 00:20:35.638 "dhchap_dhgroups": [ 00:20:35.638 "null", 00:20:35.638 "ffdhe2048", 00:20:35.638 "ffdhe3072", 00:20:35.638 "ffdhe4096", 00:20:35.638 "ffdhe6144", 00:20:35.638 "ffdhe8192" 00:20:35.638 ] 00:20:35.638 } 00:20:35.638 }, 00:20:35.638 { 00:20:35.638 "method": "nvmf_set_max_subsystems", 00:20:35.638 "params": { 00:20:35.638 "max_subsystems": 1024 00:20:35.638 } 00:20:35.638 }, 00:20:35.638 { 00:20:35.638 "method": "nvmf_set_crdt", 00:20:35.638 "params": { 00:20:35.638 "crdt1": 0, 00:20:35.638 "crdt2": 0, 00:20:35.638 "crdt3": 0 00:20:35.638 } 00:20:35.638 }, 00:20:35.638 { 00:20:35.638 "method": "nvmf_create_transport", 00:20:35.638 "params": 
{ 00:20:35.638 "trtype": "TCP", 00:20:35.638 "max_queue_depth": 128, 00:20:35.638 "max_io_qpairs_per_ctrlr": 127, 00:20:35.638 "in_capsule_data_size": 4096, 00:20:35.638 "max_io_size": 131072, 00:20:35.638 "io_unit_size": 131072, 00:20:35.638 "max_aq_depth": 128, 00:20:35.638 "num_shared_buffers": 511, 00:20:35.638 "buf_cache_size": 4294967295, 00:20:35.638 "dif_insert_or_strip": false, 00:20:35.638 "zcopy": false, 00:20:35.638 "c2h_success": false, 00:20:35.638 "sock_priority": 0, 00:20:35.638 "abort_timeout_sec": 1, 00:20:35.638 "ack_timeout": 0, 00:20:35.638 "data_wr_pool_size": 0 00:20:35.638 } 00:20:35.638 }, 00:20:35.638 { 00:20:35.638 "method": "nvmf_create_subsystem", 00:20:35.638 "params": { 00:20:35.638 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.638 "allow_any_host": false, 00:20:35.638 "serial_number": "00000000000000000000", 00:20:35.638 "model_number": "SPDK bdev Controller", 00:20:35.638 "max_namespaces": 32, 00:20:35.638 "min_cntlid": 1, 00:20:35.638 "max_cntlid": 65519, 00:20:35.638 "ana_reporting": false 00:20:35.638 } 00:20:35.638 }, 00:20:35.638 { 00:20:35.638 "method": "nvmf_subsystem_add_host", 00:20:35.638 "params": { 00:20:35.638 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.638 "host": "nqn.2016-06.io.spdk:host1", 00:20:35.638 "psk": "key0" 00:20:35.638 } 00:20:35.638 }, 00:20:35.638 { 00:20:35.638 "method": "nvmf_subsystem_add_ns", 00:20:35.638 "params": { 00:20:35.638 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.638 "namespace": { 00:20:35.638 "nsid": 1, 00:20:35.638 "bdev_name": "malloc0", 00:20:35.638 "nguid": "B5AFF8B655B542B7B564B81039420A23", 00:20:35.639 "uuid": "b5aff8b6-55b5-42b7-b564-b81039420a23", 00:20:35.639 "no_auto_visible": false 00:20:35.639 } 00:20:35.639 } 00:20:35.639 }, 00:20:35.639 { 00:20:35.639 "method": "nvmf_subsystem_add_listener", 00:20:35.639 "params": { 00:20:35.639 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.639 "listen_address": { 00:20:35.639 "trtype": "TCP", 00:20:35.639 "adrfam": "IPv4", 00:20:35.639 "traddr": "10.0.0.2", 00:20:35.639 "trsvcid": "4420" 00:20:35.639 }, 00:20:35.639 "secure_channel": false, 00:20:35.639 "sock_impl": "ssl" 00:20:35.639 } 00:20:35.639 } 00:20:35.639 ] 00:20:35.639 } 00:20:35.639 ] 00:20:35.639 }' 00:20:35.639 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:35.899 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:35.899 "subsystems": [ 00:20:35.899 { 00:20:35.899 "subsystem": "keyring", 00:20:35.899 "config": [ 00:20:35.899 { 00:20:35.899 "method": "keyring_file_add_key", 00:20:35.899 "params": { 00:20:35.899 "name": "key0", 00:20:35.899 "path": "/tmp/tmp.YQc1xwwcMF" 00:20:35.899 } 00:20:35.899 } 00:20:35.899 ] 00:20:35.899 }, 00:20:35.899 { 00:20:35.899 "subsystem": "iobuf", 00:20:35.900 "config": [ 00:20:35.900 { 00:20:35.900 "method": "iobuf_set_options", 00:20:35.900 "params": { 00:20:35.900 "small_pool_count": 8192, 00:20:35.900 "large_pool_count": 1024, 00:20:35.900 "small_bufsize": 8192, 00:20:35.900 "large_bufsize": 135168, 00:20:35.900 "enable_numa": false 00:20:35.900 } 00:20:35.900 } 00:20:35.900 ] 00:20:35.900 }, 00:20:35.900 { 00:20:35.900 "subsystem": "sock", 00:20:35.900 "config": [ 00:20:35.900 { 00:20:35.900 "method": "sock_set_default_impl", 00:20:35.900 "params": { 00:20:35.900 "impl_name": "posix" 00:20:35.900 } 00:20:35.900 }, 00:20:35.900 { 00:20:35.900 "method": "sock_impl_set_options", 00:20:35.900 
"params": { 00:20:35.900 "impl_name": "ssl", 00:20:35.900 "recv_buf_size": 4096, 00:20:35.900 "send_buf_size": 4096, 00:20:35.900 "enable_recv_pipe": true, 00:20:35.900 "enable_quickack": false, 00:20:35.900 "enable_placement_id": 0, 00:20:35.900 "enable_zerocopy_send_server": true, 00:20:35.900 "enable_zerocopy_send_client": false, 00:20:35.900 "zerocopy_threshold": 0, 00:20:35.900 "tls_version": 0, 00:20:35.900 "enable_ktls": false 00:20:35.900 } 00:20:35.900 }, 00:20:35.900 { 00:20:35.900 "method": "sock_impl_set_options", 00:20:35.900 "params": { 00:20:35.900 "impl_name": "posix", 00:20:35.900 "recv_buf_size": 2097152, 00:20:35.900 "send_buf_size": 2097152, 00:20:35.900 "enable_recv_pipe": true, 00:20:35.900 "enable_quickack": false, 00:20:35.900 "enable_placement_id": 0, 00:20:35.900 "enable_zerocopy_send_server": true, 00:20:35.900 "enable_zerocopy_send_client": false, 00:20:35.900 "zerocopy_threshold": 0, 00:20:35.900 "tls_version": 0, 00:20:35.900 "enable_ktls": false 00:20:35.900 } 00:20:35.900 } 00:20:35.900 ] 00:20:35.900 }, 00:20:35.900 { 00:20:35.900 "subsystem": "vmd", 00:20:35.900 "config": [] 00:20:35.900 }, 00:20:35.900 { 00:20:35.900 "subsystem": "accel", 00:20:35.900 "config": [ 00:20:35.900 { 00:20:35.900 "method": "accel_set_options", 00:20:35.900 "params": { 00:20:35.900 "small_cache_size": 128, 00:20:35.900 "large_cache_size": 16, 00:20:35.900 "task_count": 2048, 00:20:35.900 "sequence_count": 2048, 00:20:35.900 "buf_count": 2048 00:20:35.900 } 00:20:35.900 } 00:20:35.900 ] 00:20:35.900 }, 00:20:35.900 { 00:20:35.900 "subsystem": "bdev", 00:20:35.900 "config": [ 00:20:35.900 { 00:20:35.900 "method": "bdev_set_options", 00:20:35.900 "params": { 00:20:35.900 "bdev_io_pool_size": 65535, 00:20:35.900 "bdev_io_cache_size": 256, 00:20:35.900 "bdev_auto_examine": true, 00:20:35.900 "iobuf_small_cache_size": 128, 00:20:35.900 "iobuf_large_cache_size": 16 00:20:35.900 } 00:20:35.900 }, 00:20:35.900 { 00:20:35.900 "method": "bdev_raid_set_options", 00:20:35.900 "params": { 00:20:35.900 "process_window_size_kb": 1024, 00:20:35.900 "process_max_bandwidth_mb_sec": 0 00:20:35.900 } 00:20:35.900 }, 00:20:35.900 { 00:20:35.900 "method": "bdev_iscsi_set_options", 00:20:35.900 "params": { 00:20:35.900 "timeout_sec": 30 00:20:35.900 } 00:20:35.900 }, 00:20:35.900 { 00:20:35.900 "method": "bdev_nvme_set_options", 00:20:35.900 "params": { 00:20:35.900 "action_on_timeout": "none", 00:20:35.900 "timeout_us": 0, 00:20:35.900 "timeout_admin_us": 0, 00:20:35.900 "keep_alive_timeout_ms": 10000, 00:20:35.900 "arbitration_burst": 0, 00:20:35.900 "low_priority_weight": 0, 00:20:35.900 "medium_priority_weight": 0, 00:20:35.900 "high_priority_weight": 0, 00:20:35.900 "nvme_adminq_poll_period_us": 10000, 00:20:35.900 "nvme_ioq_poll_period_us": 0, 00:20:35.900 "io_queue_requests": 512, 00:20:35.900 "delay_cmd_submit": true, 00:20:35.900 "transport_retry_count": 4, 00:20:35.900 "bdev_retry_count": 3, 00:20:35.900 "transport_ack_timeout": 0, 00:20:35.900 "ctrlr_loss_timeout_sec": 0, 00:20:35.900 "reconnect_delay_sec": 0, 00:20:35.900 "fast_io_fail_timeout_sec": 0, 00:20:35.900 "disable_auto_failback": false, 00:20:35.900 "generate_uuids": false, 00:20:35.900 "transport_tos": 0, 00:20:35.900 "nvme_error_stat": false, 00:20:35.900 "rdma_srq_size": 0, 00:20:35.900 "io_path_stat": false, 00:20:35.900 "allow_accel_sequence": false, 00:20:35.900 "rdma_max_cq_size": 0, 00:20:35.900 "rdma_cm_event_timeout_ms": 0, 00:20:35.900 "dhchap_digests": [ 00:20:35.900 "sha256", 00:20:35.900 "sha384", 00:20:35.900 
"sha512" 00:20:35.900 ], 00:20:35.900 "dhchap_dhgroups": [ 00:20:35.900 "null", 00:20:35.900 "ffdhe2048", 00:20:35.900 "ffdhe3072", 00:20:35.900 "ffdhe4096", 00:20:35.900 "ffdhe6144", 00:20:35.900 "ffdhe8192" 00:20:35.900 ] 00:20:35.900 } 00:20:35.900 }, 00:20:35.900 { 00:20:35.900 "method": "bdev_nvme_attach_controller", 00:20:35.900 "params": { 00:20:35.900 "name": "nvme0", 00:20:35.900 "trtype": "TCP", 00:20:35.900 "adrfam": "IPv4", 00:20:35.900 "traddr": "10.0.0.2", 00:20:35.900 "trsvcid": "4420", 00:20:35.900 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.900 "prchk_reftag": false, 00:20:35.900 "prchk_guard": false, 00:20:35.900 "ctrlr_loss_timeout_sec": 0, 00:20:35.900 "reconnect_delay_sec": 0, 00:20:35.900 "fast_io_fail_timeout_sec": 0, 00:20:35.900 "psk": "key0", 00:20:35.900 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:35.900 "hdgst": false, 00:20:35.900 "ddgst": false, 00:20:35.900 "multipath": "multipath" 00:20:35.900 } 00:20:35.900 }, 00:20:35.900 { 00:20:35.900 "method": "bdev_nvme_set_hotplug", 00:20:35.900 "params": { 00:20:35.900 "period_us": 100000, 00:20:35.900 "enable": false 00:20:35.900 } 00:20:35.900 }, 00:20:35.900 { 00:20:35.900 "method": "bdev_enable_histogram", 00:20:35.900 "params": { 00:20:35.900 "name": "nvme0n1", 00:20:35.900 "enable": true 00:20:35.900 } 00:20:35.900 }, 00:20:35.900 { 00:20:35.900 "method": "bdev_wait_for_examine" 00:20:35.900 } 00:20:35.900 ] 00:20:35.900 }, 00:20:35.900 { 00:20:35.900 "subsystem": "nbd", 00:20:35.900 "config": [] 00:20:35.900 } 00:20:35.900 ] 00:20:35.900 }' 00:20:35.900 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3682586 00:20:35.900 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3682586 ']' 00:20:35.900 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3682586 00:20:35.900 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:35.900 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:35.900 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3682586 00:20:35.900 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:35.900 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:35.900 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3682586' 00:20:35.900 killing process with pid 3682586 00:20:35.900 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3682586 00:20:35.900 Received shutdown signal, test time was about 1.000000 seconds 00:20:35.900 00:20:35.900 Latency(us) 00:20:35.900 [2024-11-26T18:58:36.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.900 [2024-11-26T18:58:36.721Z] =================================================================================================================== 00:20:35.900 [2024-11-26T18:58:36.721Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:35.900 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3682586 00:20:36.161 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3682273 00:20:36.161 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3682273 
']' 00:20:36.161 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3682273 00:20:36.161 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:36.161 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:36.161 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3682273 00:20:36.161 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:36.161 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:36.161 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3682273' 00:20:36.161 killing process with pid 3682273 00:20:36.161 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3682273 00:20:36.161 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3682273 00:20:36.161 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:36.161 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:36.161 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:36.161 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:36.161 "subsystems": [ 00:20:36.161 { 00:20:36.161 "subsystem": "keyring", 00:20:36.161 "config": [ 00:20:36.161 { 00:20:36.162 "method": "keyring_file_add_key", 00:20:36.162 "params": { 00:20:36.162 "name": "key0", 00:20:36.162 "path": "/tmp/tmp.YQc1xwwcMF" 00:20:36.162 } 00:20:36.162 } 00:20:36.162 ] 00:20:36.162 }, 00:20:36.162 { 00:20:36.162 "subsystem": "iobuf", 00:20:36.162 "config": [ 00:20:36.162 { 00:20:36.162 "method": "iobuf_set_options", 00:20:36.162 "params": { 00:20:36.162 "small_pool_count": 8192, 00:20:36.162 "large_pool_count": 1024, 00:20:36.162 "small_bufsize": 8192, 00:20:36.162 "large_bufsize": 135168, 00:20:36.162 "enable_numa": false 00:20:36.162 } 00:20:36.162 } 00:20:36.162 ] 00:20:36.162 }, 00:20:36.162 { 00:20:36.162 "subsystem": "sock", 00:20:36.162 "config": [ 00:20:36.162 { 00:20:36.162 "method": "sock_set_default_impl", 00:20:36.162 "params": { 00:20:36.162 "impl_name": "posix" 00:20:36.162 } 00:20:36.162 }, 00:20:36.162 { 00:20:36.162 "method": "sock_impl_set_options", 00:20:36.162 "params": { 00:20:36.162 "impl_name": "ssl", 00:20:36.162 "recv_buf_size": 4096, 00:20:36.162 "send_buf_size": 4096, 00:20:36.162 "enable_recv_pipe": true, 00:20:36.162 "enable_quickack": false, 00:20:36.162 "enable_placement_id": 0, 00:20:36.162 "enable_zerocopy_send_server": true, 00:20:36.162 "enable_zerocopy_send_client": false, 00:20:36.162 "zerocopy_threshold": 0, 00:20:36.162 "tls_version": 0, 00:20:36.162 "enable_ktls": false 00:20:36.162 } 00:20:36.162 }, 00:20:36.162 { 00:20:36.162 "method": "sock_impl_set_options", 00:20:36.162 "params": { 00:20:36.162 "impl_name": "posix", 00:20:36.162 "recv_buf_size": 2097152, 00:20:36.162 "send_buf_size": 2097152, 00:20:36.162 "enable_recv_pipe": true, 00:20:36.162 "enable_quickack": false, 00:20:36.162 "enable_placement_id": 0, 00:20:36.162 "enable_zerocopy_send_server": true, 00:20:36.162 "enable_zerocopy_send_client": false, 00:20:36.162 "zerocopy_threshold": 0, 00:20:36.162 "tls_version": 0, 00:20:36.162 "enable_ktls": 
false 00:20:36.162 } 00:20:36.162 } 00:20:36.162 ] 00:20:36.162 }, 00:20:36.162 { 00:20:36.162 "subsystem": "vmd", 00:20:36.162 "config": [] 00:20:36.162 }, 00:20:36.162 { 00:20:36.162 "subsystem": "accel", 00:20:36.162 "config": [ 00:20:36.162 { 00:20:36.162 "method": "accel_set_options", 00:20:36.162 "params": { 00:20:36.162 "small_cache_size": 128, 00:20:36.162 "large_cache_size": 16, 00:20:36.162 "task_count": 2048, 00:20:36.162 "sequence_count": 2048, 00:20:36.162 "buf_count": 2048 00:20:36.162 } 00:20:36.162 } 00:20:36.162 ] 00:20:36.162 }, 00:20:36.162 { 00:20:36.162 "subsystem": "bdev", 00:20:36.162 "config": [ 00:20:36.162 { 00:20:36.162 "method": "bdev_set_options", 00:20:36.162 "params": { 00:20:36.162 "bdev_io_pool_size": 65535, 00:20:36.162 "bdev_io_cache_size": 256, 00:20:36.162 "bdev_auto_examine": true, 00:20:36.162 "iobuf_small_cache_size": 128, 00:20:36.162 "iobuf_large_cache_size": 16 00:20:36.162 } 00:20:36.162 }, 00:20:36.162 { 00:20:36.162 "method": "bdev_raid_set_options", 00:20:36.162 "params": { 00:20:36.162 "process_window_size_kb": 1024, 00:20:36.162 "process_max_bandwidth_mb_sec": 0 00:20:36.162 } 00:20:36.162 }, 00:20:36.162 { 00:20:36.162 "method": "bdev_iscsi_set_options", 00:20:36.162 "params": { 00:20:36.162 "timeout_sec": 30 00:20:36.162 } 00:20:36.162 }, 00:20:36.162 { 00:20:36.162 "method": "bdev_nvme_set_options", 00:20:36.162 "params": { 00:20:36.162 "action_on_timeout": "none", 00:20:36.162 "timeout_us": 0, 00:20:36.162 "timeout_admin_us": 0, 00:20:36.162 "keep_alive_timeout_ms": 10000, 00:20:36.162 "arbitration_burst": 0, 00:20:36.162 "low_priority_weight": 0, 00:20:36.162 "medium_priority_weight": 0, 00:20:36.162 "high_priority_weight": 0, 00:20:36.162 "nvme_adminq_poll_period_us": 10000, 00:20:36.162 "nvme_ioq_poll_period_us": 0, 00:20:36.162 "io_queue_requests": 0, 00:20:36.162 "delay_cmd_submit": true, 00:20:36.162 "transport_retry_count": 4, 00:20:36.162 "bdev_retry_count": 3, 00:20:36.162 "transport_ack_timeout": 0, 00:20:36.162 "ctrlr_loss_timeout_sec": 0, 00:20:36.162 "reconnect_delay_sec": 0, 00:20:36.162 "fast_io_fail_timeout_sec": 0, 00:20:36.162 "disable_auto_failback": false, 00:20:36.162 "generate_uuids": false, 00:20:36.162 "transport_tos": 0, 00:20:36.162 "nvme_error_stat": false, 00:20:36.162 "rdma_srq_size": 0, 00:20:36.162 "io_path_stat": false, 00:20:36.162 "allow_accel_sequence": false, 00:20:36.162 "rdma_max_cq_size": 0, 00:20:36.162 "rdma_cm_event_timeout_ms": 0, 00:20:36.162 "dhchap_digests": [ 00:20:36.162 "sha256", 00:20:36.162 "sha384", 00:20:36.162 "sha512" 00:20:36.162 ], 00:20:36.162 "dhchap_dhgroups": [ 00:20:36.162 "null", 00:20:36.162 "ffdhe2048", 00:20:36.162 "ffdhe3072", 00:20:36.162 "ffdhe4096", 00:20:36.162 "ffdhe6144", 00:20:36.162 "ffdhe8192" 00:20:36.162 ] 00:20:36.162 } 00:20:36.162 }, 00:20:36.162 { 00:20:36.162 "method": "bdev_nvme_set_hotplug", 00:20:36.162 "params": { 00:20:36.162 "period_us": 100000, 00:20:36.162 "enable": false 00:20:36.162 } 00:20:36.162 }, 00:20:36.162 { 00:20:36.162 "method": "bdev_malloc_create", 00:20:36.162 "params": { 00:20:36.162 "name": "malloc0", 00:20:36.162 "num_blocks": 8192, 00:20:36.162 "block_size": 4096, 00:20:36.162 "physical_block_size": 4096, 00:20:36.162 "uuid": "b5aff8b6-55b5-42b7-b564-b81039420a23", 00:20:36.162 "optimal_io_boundary": 0, 00:20:36.162 "md_size": 0, 00:20:36.162 "dif_type": 0, 00:20:36.162 "dif_is_head_of_md": false, 00:20:36.162 "dif_pi_format": 0 00:20:36.162 } 00:20:36.162 }, 00:20:36.162 { 00:20:36.162 "method": "bdev_wait_for_examine" 
00:20:36.162 } 00:20:36.162 ] 00:20:36.162 }, 00:20:36.162 { 00:20:36.162 "subsystem": "nbd", 00:20:36.162 "config": [] 00:20:36.162 }, 00:20:36.162 { 00:20:36.162 "subsystem": "scheduler", 00:20:36.162 "config": [ 00:20:36.162 { 00:20:36.162 "method": "framework_set_scheduler", 00:20:36.162 "params": { 00:20:36.162 "name": "static" 00:20:36.162 } 00:20:36.162 } 00:20:36.162 ] 00:20:36.162 }, 00:20:36.162 { 00:20:36.162 "subsystem": "nvmf", 00:20:36.162 "config": [ 00:20:36.162 { 00:20:36.162 "method": "nvmf_set_config", 00:20:36.162 "params": { 00:20:36.162 "discovery_filter": "match_any", 00:20:36.162 "admin_cmd_passthru": { 00:20:36.162 "identify_ctrlr": false 00:20:36.162 }, 00:20:36.162 "dhchap_digests": [ 00:20:36.162 "sha256", 00:20:36.162 "sha384", 00:20:36.162 "sha512" 00:20:36.162 ], 00:20:36.162 "dhchap_dhgroups": [ 00:20:36.162 "null", 00:20:36.162 "ffdhe2048", 00:20:36.162 "ffdhe3072", 00:20:36.162 "ffdhe4096", 00:20:36.162 "ffdhe6144", 00:20:36.162 "ffdhe8192" 00:20:36.162 ] 00:20:36.162 } 00:20:36.162 }, 00:20:36.162 { 00:20:36.162 "method": "nvmf_set_max_subsystems", 00:20:36.162 "params": { 00:20:36.162 "max_subsystems": 1024 00:20:36.162 } 00:20:36.162 }, 00:20:36.162 { 00:20:36.162 "method": "nvmf_set_crdt", 00:20:36.162 "params": { 00:20:36.162 "crdt1": 0, 00:20:36.162 "crdt2": 0, 00:20:36.162 "crdt3": 0 00:20:36.162 } 00:20:36.162 }, 00:20:36.162 { 00:20:36.162 "method": "nvmf_create_transport", 00:20:36.162 "params": { 00:20:36.162 "trtype": "TCP", 00:20:36.162 "max_queue_depth": 128, 00:20:36.162 "max_io_qpairs_per_ctrlr": 127, 00:20:36.162 "in_capsule_data_size": 4096, 00:20:36.162 "max_io_size": 131072, 00:20:36.162 "io_unit_size": 131072, 00:20:36.162 "max_aq_depth": 128, 00:20:36.162 "num_shared_buffers": 511, 00:20:36.162 "buf_cache_size": 4294967295, 00:20:36.162 "dif_insert_or_strip": false, 00:20:36.162 "zcopy": false, 00:20:36.162 "c2h_success": false, 00:20:36.162 "sock_priority": 0, 00:20:36.162 "abort_timeout_sec": 1, 00:20:36.162 "ack_timeout": 0, 00:20:36.162 "data_wr_pool_size": 0 00:20:36.162 } 00:20:36.162 }, 00:20:36.162 { 00:20:36.162 "method": "nvmf_create_subsystem", 00:20:36.162 "params": { 00:20:36.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.162 "allow_any_host": false, 00:20:36.162 "serial_number": "00000000000000000000", 00:20:36.162 "model_number": "SPDK bdev Controller", 00:20:36.162 "max_namespaces": 32, 00:20:36.162 "min_cntlid": 1, 00:20:36.162 "max_cntlid": 65519, 00:20:36.162 "ana_reporting": false 00:20:36.162 } 00:20:36.162 }, 00:20:36.162 { 00:20:36.162 "method": "nvmf_subsystem_add_host", 00:20:36.162 "params": { 00:20:36.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.162 "host": "nqn.2016-06.io.spdk:host1", 00:20:36.162 "psk": "key0" 00:20:36.162 } 00:20:36.162 }, 00:20:36.162 { 00:20:36.162 "method": "nvmf_subsystem_add_ns", 00:20:36.162 "params": { 00:20:36.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.162 "namespace": { 00:20:36.162 "nsid": 1, 00:20:36.162 "bdev_name": "malloc0", 00:20:36.162 "nguid": "B5AFF8B655B542B7B564B81039420A23", 00:20:36.162 "uuid": "b5aff8b6-55b5-42b7-b564-b81039420a23", 00:20:36.162 "no_auto_visible": false 00:20:36.162 } 00:20:36.162 } 00:20:36.163 }, 00:20:36.163 { 00:20:36.163 "method": "nvmf_subsystem_add_listener", 00:20:36.163 "params": { 00:20:36.163 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.163 "listen_address": { 00:20:36.163 "trtype": "TCP", 00:20:36.163 "adrfam": "IPv4", 00:20:36.163 "traddr": "10.0.0.2", 00:20:36.163 "trsvcid": "4420" 00:20:36.163 }, 00:20:36.163 
"secure_channel": false, 00:20:36.163 "sock_impl": "ssl" 00:20:36.163 } 00:20:36.163 } 00:20:36.163 ] 00:20:36.163 } 00:20:36.163 ] 00:20:36.163 }' 00:20:36.163 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.163 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3683260 00:20:36.163 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3683260 00:20:36.163 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:36.163 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3683260 ']' 00:20:36.163 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.163 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:36.163 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.163 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:36.163 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.163 [2024-11-26 19:58:36.973931] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:20:36.163 [2024-11-26 19:58:36.973989] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.423 [2024-11-26 19:58:37.063811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.423 [2024-11-26 19:58:37.092671] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.423 [2024-11-26 19:58:37.092699] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.423 [2024-11-26 19:58:37.092705] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.423 [2024-11-26 19:58:37.092709] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.423 [2024-11-26 19:58:37.092714] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:36.423 [2024-11-26 19:58:37.093186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.683 [2024-11-26 19:58:37.287533] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.683 [2024-11-26 19:58:37.319567] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:36.683 [2024-11-26 19:58:37.319783] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.943 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:36.943 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:36.943 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:36.943 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:36.943 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.203 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.203 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3683303 00:20:37.203 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3683303 /var/tmp/bdevperf.sock 00:20:37.203 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3683303 ']' 00:20:37.203 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.203 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:37.203 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
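This is the config-replay idiom at the heart of target/tls.sh@273 and @274: the JSON captured earlier with save_config is fed back through a file descriptor (-c /dev/fd/62 for the target restarted above, -c /dev/fd/63 for the bdevperf relaunch that follows), and tls.sh@279 then confirms that nvme0 was re-created purely from the saved configuration. A minimal sketch, assuming bash process substitution is what produces the /dev/fd paths seen in the log; backgrounding with & stands in for the harness's process management, and the plain rpc.py calls gloss over the network-namespace handling that rpc_cmd does in the test:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    # Captured earlier (tls.sh@267/@268), while the previous instances still ran:
    tgtcfg=$($rpc save_config)
    bperfcfg=$($rpc -s /var/tmp/bdevperf.sock save_config)
    # After the old processes are shut down, both sides restart from JSON alone:
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
    $bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 \
        -c <(echo "$bperfcfg") &
    # tls.sh@279: the controller must come back as nvme0 from the saved config
    # (it contains the keyring entry and the bdev_nvme_attach_controller call).
    name=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]
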
00:20:37.203 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:37.203 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:37.203 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.203 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:37.203 "subsystems": [ 00:20:37.203 { 00:20:37.203 "subsystem": "keyring", 00:20:37.203 "config": [ 00:20:37.203 { 00:20:37.203 "method": "keyring_file_add_key", 00:20:37.203 "params": { 00:20:37.203 "name": "key0", 00:20:37.203 "path": "/tmp/tmp.YQc1xwwcMF" 00:20:37.203 } 00:20:37.203 } 00:20:37.203 ] 00:20:37.203 }, 00:20:37.203 { 00:20:37.203 "subsystem": "iobuf", 00:20:37.203 "config": [ 00:20:37.203 { 00:20:37.203 "method": "iobuf_set_options", 00:20:37.203 "params": { 00:20:37.203 "small_pool_count": 8192, 00:20:37.203 "large_pool_count": 1024, 00:20:37.203 "small_bufsize": 8192, 00:20:37.203 "large_bufsize": 135168, 00:20:37.203 "enable_numa": false 00:20:37.203 } 00:20:37.203 } 00:20:37.203 ] 00:20:37.203 }, 00:20:37.203 { 00:20:37.203 "subsystem": "sock", 00:20:37.203 "config": [ 00:20:37.203 { 00:20:37.203 "method": "sock_set_default_impl", 00:20:37.203 "params": { 00:20:37.203 "impl_name": "posix" 00:20:37.203 } 00:20:37.203 }, 00:20:37.203 { 00:20:37.203 "method": "sock_impl_set_options", 00:20:37.203 "params": { 00:20:37.203 "impl_name": "ssl", 00:20:37.203 "recv_buf_size": 4096, 00:20:37.203 "send_buf_size": 4096, 00:20:37.203 "enable_recv_pipe": true, 00:20:37.203 "enable_quickack": false, 00:20:37.203 "enable_placement_id": 0, 00:20:37.203 "enable_zerocopy_send_server": true, 00:20:37.203 "enable_zerocopy_send_client": false, 00:20:37.203 "zerocopy_threshold": 0, 00:20:37.203 "tls_version": 0, 00:20:37.203 "enable_ktls": false 00:20:37.203 } 00:20:37.203 }, 00:20:37.203 { 00:20:37.203 "method": "sock_impl_set_options", 00:20:37.203 "params": { 00:20:37.203 "impl_name": "posix", 00:20:37.203 "recv_buf_size": 2097152, 00:20:37.203 "send_buf_size": 2097152, 00:20:37.203 "enable_recv_pipe": true, 00:20:37.203 "enable_quickack": false, 00:20:37.203 "enable_placement_id": 0, 00:20:37.203 "enable_zerocopy_send_server": true, 00:20:37.203 "enable_zerocopy_send_client": false, 00:20:37.203 "zerocopy_threshold": 0, 00:20:37.203 "tls_version": 0, 00:20:37.203 "enable_ktls": false 00:20:37.203 } 00:20:37.203 } 00:20:37.203 ] 00:20:37.203 }, 00:20:37.203 { 00:20:37.203 "subsystem": "vmd", 00:20:37.203 "config": [] 00:20:37.203 }, 00:20:37.203 { 00:20:37.203 "subsystem": "accel", 00:20:37.203 "config": [ 00:20:37.203 { 00:20:37.203 "method": "accel_set_options", 00:20:37.203 "params": { 00:20:37.203 "small_cache_size": 128, 00:20:37.203 "large_cache_size": 16, 00:20:37.203 "task_count": 2048, 00:20:37.203 "sequence_count": 2048, 00:20:37.203 "buf_count": 2048 00:20:37.203 } 00:20:37.203 } 00:20:37.203 ] 00:20:37.203 }, 00:20:37.203 { 00:20:37.203 "subsystem": "bdev", 00:20:37.203 "config": [ 00:20:37.203 { 00:20:37.203 "method": "bdev_set_options", 00:20:37.203 "params": { 00:20:37.203 "bdev_io_pool_size": 65535, 00:20:37.203 "bdev_io_cache_size": 256, 00:20:37.203 "bdev_auto_examine": true, 00:20:37.203 "iobuf_small_cache_size": 128, 00:20:37.203 "iobuf_large_cache_size": 16 00:20:37.203 } 00:20:37.203 }, 00:20:37.203 { 00:20:37.203 "method": 
"bdev_raid_set_options", 00:20:37.203 "params": { 00:20:37.203 "process_window_size_kb": 1024, 00:20:37.203 "process_max_bandwidth_mb_sec": 0 00:20:37.203 } 00:20:37.203 }, 00:20:37.203 { 00:20:37.203 "method": "bdev_iscsi_set_options", 00:20:37.203 "params": { 00:20:37.203 "timeout_sec": 30 00:20:37.203 } 00:20:37.203 }, 00:20:37.203 { 00:20:37.203 "method": "bdev_nvme_set_options", 00:20:37.203 "params": { 00:20:37.203 "action_on_timeout": "none", 00:20:37.203 "timeout_us": 0, 00:20:37.203 "timeout_admin_us": 0, 00:20:37.203 "keep_alive_timeout_ms": 10000, 00:20:37.203 "arbitration_burst": 0, 00:20:37.203 "low_priority_weight": 0, 00:20:37.203 "medium_priority_weight": 0, 00:20:37.203 "high_priority_weight": 0, 00:20:37.203 "nvme_adminq_poll_period_us": 10000, 00:20:37.203 "nvme_ioq_poll_period_us": 0, 00:20:37.203 "io_queue_requests": 512, 00:20:37.203 "delay_cmd_submit": true, 00:20:37.203 "transport_retry_count": 4, 00:20:37.203 "bdev_retry_count": 3, 00:20:37.203 "transport_ack_timeout": 0, 00:20:37.203 "ctrlr_loss_timeout_sec": 0, 00:20:37.203 "reconnect_delay_sec": 0, 00:20:37.203 "fast_io_fail_timeout_sec": 0, 00:20:37.203 "disable_auto_failback": false, 00:20:37.203 "generate_uuids": false, 00:20:37.203 "transport_tos": 0, 00:20:37.203 "nvme_error_stat": false, 00:20:37.203 "rdma_srq_size": 0, 00:20:37.203 "io_path_stat": false, 00:20:37.203 "allow_accel_sequence": false, 00:20:37.203 "rdma_max_cq_size": 0, 00:20:37.203 "rdma_cm_event_timeout_ms": 0, 00:20:37.203 "dhchap_digests": [ 00:20:37.203 "sha256", 00:20:37.203 "sha384", 00:20:37.203 "sha512" 00:20:37.203 ], 00:20:37.203 "dhchap_dhgroups": [ 00:20:37.203 "null", 00:20:37.203 "ffdhe2048", 00:20:37.203 "ffdhe3072", 00:20:37.203 "ffdhe4096", 00:20:37.203 "ffdhe6144", 00:20:37.203 "ffdhe8192" 00:20:37.203 ] 00:20:37.203 } 00:20:37.203 }, 00:20:37.203 { 00:20:37.203 "method": "bdev_nvme_attach_controller", 00:20:37.203 "params": { 00:20:37.203 "name": "nvme0", 00:20:37.203 "trtype": "TCP", 00:20:37.203 "adrfam": "IPv4", 00:20:37.203 "traddr": "10.0.0.2", 00:20:37.203 "trsvcid": "4420", 00:20:37.203 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.203 "prchk_reftag": false, 00:20:37.203 "prchk_guard": false, 00:20:37.203 "ctrlr_loss_timeout_sec": 0, 00:20:37.203 "reconnect_delay_sec": 0, 00:20:37.203 "fast_io_fail_timeout_sec": 0, 00:20:37.203 "psk": "key0", 00:20:37.203 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.203 "hdgst": false, 00:20:37.203 "ddgst": false, 00:20:37.203 "multipath": "multipath" 00:20:37.203 } 00:20:37.203 }, 00:20:37.203 { 00:20:37.203 "method": "bdev_nvme_set_hotplug", 00:20:37.203 "params": { 00:20:37.203 "period_us": 100000, 00:20:37.203 "enable": false 00:20:37.203 } 00:20:37.203 }, 00:20:37.203 { 00:20:37.203 "method": "bdev_enable_histogram", 00:20:37.203 "params": { 00:20:37.203 "name": "nvme0n1", 00:20:37.203 "enable": true 00:20:37.203 } 00:20:37.203 }, 00:20:37.203 { 00:20:37.203 "method": "bdev_wait_for_examine" 00:20:37.203 } 00:20:37.203 ] 00:20:37.203 }, 00:20:37.203 { 00:20:37.203 "subsystem": "nbd", 00:20:37.203 "config": [] 00:20:37.203 } 00:20:37.203 ] 00:20:37.203 }' 00:20:37.203 [2024-11-26 19:58:37.852666] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:20:37.204 [2024-11-26 19:58:37.852720] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3683303 ]
00:20:37.204 [2024-11-26 19:58:37.934323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:37.204 [2024-11-26 19:58:37.964151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:37.464 [2024-11-26 19:58:38.100486] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:20:38.035 19:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:38.035 19:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:20:38.035 19:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:38.035 19:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name'
00:20:38.035 19:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:38.035 19:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:38.295 Running I/O for 1 seconds...
00:20:39.234 6048.00 IOPS, 23.62 MiB/s
00:20:39.234 Latency(us)
00:20:39.234 [2024-11-26T18:58:40.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:39.234 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:20:39.234 Verification LBA range: start 0x0 length 0x2000
00:20:39.234 nvme0n1 : 1.02 6076.25 23.74 0.00 0.00 20895.95 6116.69 23592.96
00:20:39.234 [2024-11-26T18:58:40.055Z] ===================================================================================================================
00:20:39.234 [2024-11-26T18:58:40.055Z] Total : 6076.25 23.74 0.00 0.00 20895.95 6116.69 23592.96
00:20:39.234 {
00:20:39.234 "results": [
00:20:39.234 {
00:20:39.234 "job": "nvme0n1",
00:20:39.234 "core_mask": "0x2",
00:20:39.234 "workload": "verify",
00:20:39.234 "status": "finished",
00:20:39.234 "verify_range": {
00:20:39.234 "start": 0,
00:20:39.234 "length": 8192
00:20:39.234 },
00:20:39.234 "queue_depth": 128,
00:20:39.234 "io_size": 4096,
00:20:39.234 "runtime": 1.016417,
00:20:39.234 "iops": 6076.246265066405,
00:20:39.234 "mibps": 23.735336972915643,
00:20:39.234 "io_failed": 0,
00:20:39.234 "io_timeout": 0,
00:20:39.234 "avg_latency_us": 20895.951364421417,
00:20:39.234 "min_latency_us": 6116.693333333334,
00:20:39.234 "max_latency_us": 23592.96
00:20:39.234 }
00:20:39.234 ],
00:20:39.234 "core_count": 1
00:20:39.234 }
00:20:39.234 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT
00:20:39.234 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup
00:20:39.234 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0
00:20:39.234 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id
00:20:39.234 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0
00:20:39.234 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid
']' 00:20:39.235 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:39.235 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:39.235 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:39.235 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:39.235 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:39.235 nvmf_trace.0 00:20:39.496 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:20:39.496 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3683303 00:20:39.496 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3683303 ']' 00:20:39.496 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3683303 00:20:39.496 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:39.496 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.496 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3683303 00:20:39.496 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:39.496 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:39.496 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3683303' 00:20:39.496 killing process with pid 3683303 00:20:39.496 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3683303 00:20:39.496 Received shutdown signal, test time was about 1.000000 seconds 00:20:39.496 00:20:39.496 Latency(us) 00:20:39.496 [2024-11-26T18:58:40.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.496 [2024-11-26T18:58:40.317Z] =================================================================================================================== 00:20:39.496 [2024-11-26T18:58:40.317Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:39.496 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3683303 00:20:39.496 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:39.496 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:39.496 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:39.496 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:39.496 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:39.496 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:39.496 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:39.496 rmmod nvme_tcp 00:20:39.496 rmmod nvme_fabrics 00:20:39.496 rmmod nvme_keyring 00:20:39.496 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:39.496 19:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:39.496 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:39.496 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3683260 ']' 00:20:39.496 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3683260 00:20:39.496 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3683260 ']' 00:20:39.496 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3683260 00:20:39.496 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:39.496 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.496 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3683260 00:20:39.758 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:39.758 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:39.758 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3683260' 00:20:39.758 killing process with pid 3683260 00:20:39.758 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3683260 00:20:39.758 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3683260 00:20:39.758 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:39.758 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:39.758 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:39.758 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:39.758 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:39.758 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:39.758 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:39.758 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:39.758 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:39.758 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.758 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.758 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.305 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:42.305 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.TxxQW2Q7kP /tmp/tmp.Hw6BfNHLdp /tmp/tmp.YQc1xwwcMF 00:20:42.305 00:20:42.305 real 1m28.156s 00:20:42.305 user 2m19.353s 00:20:42.305 sys 0m27.174s 00:20:42.305 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:42.305 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.305 ************************************ 00:20:42.305 END TEST nvmf_tls 
00:20:42.305 ************************************ 00:20:42.305 19:58:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:42.305 19:58:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:42.305 19:58:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:42.305 19:58:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:42.305 ************************************ 00:20:42.305 START TEST nvmf_fips 00:20:42.305 ************************************ 00:20:42.305 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:42.305 * Looking for test storage... 00:20:42.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:42.305 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:42.305 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:20:42.305 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:42.305 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:42.305 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:42.305 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:42.305 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:42.305 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.305 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:42.305 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:42.305 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:42.305 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:42.305 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:42.305 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:42.305 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:42.305 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:42.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.306 --rc genhtml_branch_coverage=1 00:20:42.306 --rc genhtml_function_coverage=1 00:20:42.306 --rc genhtml_legend=1 00:20:42.306 --rc geninfo_all_blocks=1 00:20:42.306 --rc geninfo_unexecuted_blocks=1 00:20:42.306 00:20:42.306 ' 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:42.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.306 --rc genhtml_branch_coverage=1 00:20:42.306 --rc genhtml_function_coverage=1 00:20:42.306 --rc genhtml_legend=1 00:20:42.306 --rc geninfo_all_blocks=1 00:20:42.306 --rc geninfo_unexecuted_blocks=1 00:20:42.306 00:20:42.306 ' 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:42.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.306 --rc genhtml_branch_coverage=1 00:20:42.306 --rc genhtml_function_coverage=1 00:20:42.306 --rc genhtml_legend=1 00:20:42.306 --rc geninfo_all_blocks=1 00:20:42.306 --rc geninfo_unexecuted_blocks=1 00:20:42.306 00:20:42.306 ' 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:42.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.306 --rc genhtml_branch_coverage=1 00:20:42.306 --rc genhtml_function_coverage=1 00:20:42.306 --rc genhtml_legend=1 00:20:42.306 --rc geninfo_all_blocks=1 00:20:42.306 --rc geninfo_unexecuted_blocks=1 00:20:42.306 00:20:42.306 ' 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:42.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:42.306 19:58:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:42.306 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:20:42.307 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:20:42.307 Error setting digest 00:20:42.307 409269782F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:42.307 409269782F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:42.307 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:20:42.307 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:42.307 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:42.307 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:42.307 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:42.307 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:42.307 
19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.307 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:42.307 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:42.307 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:42.307 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.307 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:42.307 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.307 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:42.307 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:42.307 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:42.307 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:50.445 19:58:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:50.445 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:50.445 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:50.445 19:58:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:50.445 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:50.445 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:50.445 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:50.446 19:58:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:20:50.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:50.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms
00:20:50.446 
00:20:50.446 --- 10.0.0.2 ping statistics ---
00:20:50.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:50.446 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:50.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:50.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms
00:20:50.446 
00:20:50.446 --- 10.0.0.1 ping statistics ---
00:20:50.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:50.446 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3688048
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3688048
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3688048 ']'
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:50.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:50.446 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:20:50.446 [2024-11-26 19:58:50.687050] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization...
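From here on the test pushes bytes across a real link: nvmftestinit has just moved one port of the two-port E810 adapter (cvl_0_0, the target side) into a private network namespace and left its peer (cvl_0_1) in the root namespace as the initiator, so the NVMe/TCP traffic has to cross the NIC rather than the loopback device. Condensed from the trace above into a standalone sketch (the cvl_* names are simply what this node's ice driver exposed):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # connectivity check, as above

Every nvmf_tgt invocation that follows is then wrapped in ip netns exec cvl_0_0_ns_spdk, which is exactly what the NVMF_TARGET_NS_CMD array above sets up.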
00:20:50.446 [2024-11-26 19:58:50.687125] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.446 [2024-11-26 19:58:50.786070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.446 [2024-11-26 19:58:50.837488] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:50.446 [2024-11-26 19:58:50.837534] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:50.446 [2024-11-26 19:58:50.837542] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:50.446 [2024-11-26 19:58:50.837549] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:50.446 [2024-11-26 19:58:50.837555] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:50.446 [2024-11-26 19:58:50.838284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.707 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.707 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:50.707 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:50.707 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:50.707 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:50.707 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:50.707 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:50.707 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:50.707 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:50.967 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.GpY 00:20:50.967 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:50.967 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.GpY 00:20:50.967 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.GpY 00:20:50.967 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.GpY 00:20:50.967 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:50.967 [2024-11-26 19:58:51.698134] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:50.967 [2024-11-26 19:58:51.714130] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:50.967 [2024-11-26 19:58:51.714456] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:50.967 malloc0 00:20:51.228 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:51.228 19:58:51 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3688357 00:20:51.228 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3688357 /var/tmp/bdevperf.sock 00:20:51.228 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:51.228 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3688357 ']' 00:20:51.228 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:51.228 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.228 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:51.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:51.228 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.228 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:51.228 [2024-11-26 19:58:51.856554] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:20:51.228 [2024-11-26 19:58:51.856629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3688357 ] 00:20:51.228 [2024-11-26 19:58:51.948785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.228 [2024-11-26 19:58:51.999563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.169 19:58:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:52.169 19:58:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:52.169 19:58:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.GpY 00:20:52.169 19:58:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:52.429 [2024-11-26 19:58:53.039168] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:52.429 TLSTESTn1 00:20:52.429 19:58:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:52.429 Running I/O for 10 seconds... 
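The FIPS variant drives the same TLS data path by hand over bdevperf's RPC socket: register the PSK file, attach a TLS-wrapped controller named TLSTEST, then let bdevperf.py fire the verify workload that -z left pending from the launch arguments (-q 128 -o 4096 -w verify -t 10). The equivalent three commands, with the socket, key and NQNs exactly as in the trace:

sock=/var/tmp/bdevperf.sock
scripts/rpc.py -s "$sock" keyring_file_add_key key0 /tmp/spdk-psk.GpY
scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0
examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests   # prints the JSON summary seen below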
00:20:54.751 5941.00 IOPS, 23.21 MiB/s [2024-11-26T18:58:56.572Z] 5519.50 IOPS, 21.56 MiB/s [2024-11-26T18:58:57.555Z] 5288.67 IOPS, 20.66 MiB/s [2024-11-26T18:58:58.492Z] 5450.00 IOPS, 21.29 MiB/s [2024-11-26T18:58:59.432Z] 5516.00 IOPS, 21.55 MiB/s [2024-11-26T18:59:00.374Z] 5584.33 IOPS, 21.81 MiB/s [2024-11-26T18:59:01.315Z] 5624.57 IOPS, 21.97 MiB/s [2024-11-26T18:59:02.258Z] 5669.88 IOPS, 22.15 MiB/s [2024-11-26T18:59:03.640Z] 5687.67 IOPS, 22.22 MiB/s [2024-11-26T18:59:03.640Z] 5703.90 IOPS, 22.28 MiB/s 00:21:02.819 Latency(us) 00:21:02.819 [2024-11-26T18:59:03.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.820 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:02.820 Verification LBA range: start 0x0 length 0x2000 00:21:02.820 TLSTESTn1 : 10.01 5708.73 22.30 0.00 0.00 22388.00 6007.47 36263.25 00:21:02.820 [2024-11-26T18:59:03.641Z] =================================================================================================================== 00:21:02.820 [2024-11-26T18:59:03.641Z] Total : 5708.73 22.30 0.00 0.00 22388.00 6007.47 36263.25 00:21:02.820 { 00:21:02.820 "results": [ 00:21:02.820 { 00:21:02.820 "job": "TLSTESTn1", 00:21:02.820 "core_mask": "0x4", 00:21:02.820 "workload": "verify", 00:21:02.820 "status": "finished", 00:21:02.820 "verify_range": { 00:21:02.820 "start": 0, 00:21:02.820 "length": 8192 00:21:02.820 }, 00:21:02.820 "queue_depth": 128, 00:21:02.820 "io_size": 4096, 00:21:02.820 "runtime": 10.013961, 00:21:02.820 "iops": 5708.73004198838, 00:21:02.820 "mibps": 22.29972672651711, 00:21:02.820 "io_failed": 0, 00:21:02.820 "io_timeout": 0, 00:21:02.820 "avg_latency_us": 22388.00335764806, 00:21:02.820 "min_latency_us": 6007.466666666666, 00:21:02.820 "max_latency_us": 36263.253333333334 00:21:02.820 } 00:21:02.820 ], 00:21:02.820 "core_count": 1 00:21:02.820 } 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:02.820 nvmf_trace.0 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3688357 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3688357 ']' 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 3688357 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3688357 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3688357' 00:21:02.820 killing process with pid 3688357 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3688357 00:21:02.820 Received shutdown signal, test time was about 10.000000 seconds 00:21:02.820 00:21:02.820 Latency(us) 00:21:02.820 [2024-11-26T18:59:03.641Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.820 [2024-11-26T18:59:03.641Z] =================================================================================================================== 00:21:02.820 [2024-11-26T18:59:03.641Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3688357 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:02.820 rmmod nvme_tcp 00:21:02.820 rmmod nvme_fabrics 00:21:02.820 rmmod nvme_keyring 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3688048 ']' 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3688048 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3688048 ']' 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3688048 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:02.820 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:03.080 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3688048 00:21:03.080 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:03.080 19:59:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:03.080 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3688048' 00:21:03.080 killing process with pid 3688048 00:21:03.080 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3688048 00:21:03.080 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3688048 00:21:03.080 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:03.080 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:03.080 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:03.080 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:21:03.080 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:03.080 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:03.080 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:03.080 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:03.080 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:03.080 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.080 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.080 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.623 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:05.623 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.GpY 00:21:05.623 00:21:05.623 real 0m23.254s 00:21:05.623 user 0m24.466s 00:21:05.623 sys 0m10.150s 00:21:05.623 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.623 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:05.623 ************************************ 00:21:05.623 END TEST nvmf_fips 00:21:05.623 ************************************ 00:21:05.623 19:59:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:05.623 19:59:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:05.623 19:59:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:05.623 19:59:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:05.623 ************************************ 00:21:05.623 START TEST nvmf_control_msg_list 00:21:05.623 ************************************ 00:21:05.623 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:05.623 * Looking for test storage... 
00:21:05.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:05.623 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:05.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.624 --rc genhtml_branch_coverage=1 00:21:05.624 --rc genhtml_function_coverage=1 00:21:05.624 --rc genhtml_legend=1 00:21:05.624 --rc geninfo_all_blocks=1 00:21:05.624 --rc geninfo_unexecuted_blocks=1 00:21:05.624 00:21:05.624 ' 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:05.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.624 --rc genhtml_branch_coverage=1 00:21:05.624 --rc genhtml_function_coverage=1 00:21:05.624 --rc genhtml_legend=1 00:21:05.624 --rc geninfo_all_blocks=1 00:21:05.624 --rc geninfo_unexecuted_blocks=1 00:21:05.624 00:21:05.624 ' 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:05.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.624 --rc genhtml_branch_coverage=1 00:21:05.624 --rc genhtml_function_coverage=1 00:21:05.624 --rc genhtml_legend=1 00:21:05.624 --rc geninfo_all_blocks=1 00:21:05.624 --rc geninfo_unexecuted_blocks=1 00:21:05.624 00:21:05.624 ' 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:05.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.624 --rc genhtml_branch_coverage=1 00:21:05.624 --rc genhtml_function_coverage=1 00:21:05.624 --rc genhtml_legend=1 00:21:05.624 --rc geninfo_all_blocks=1 00:21:05.624 --rc geninfo_unexecuted_blocks=1 00:21:05.624 00:21:05.624 ' 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:05.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:05.624 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:13.765 19:59:13 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:13.765 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.765 19:59:13 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:13.765 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:13.765 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:13.765 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:13.766 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:13.766 19:59:13 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:13.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:13.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:21:13.766 00:21:13.766 --- 10.0.0.2 ping statistics --- 00:21:13.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.766 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:13.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:13.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:21:13.766 00:21:13.766 --- 10.0.0.1 ping statistics --- 00:21:13.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.766 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3694830 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3694830 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3694830 ']' 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:13.766 19:59:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:13.766 [2024-11-26 19:59:13.834242] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:21:13.766 [2024-11-26 19:59:13.834313] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.766 [2024-11-26 19:59:13.934562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.766 [2024-11-26 19:59:13.986134] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.766 [2024-11-26 19:59:13.986193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.766 [2024-11-26 19:59:13.986202] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.766 [2024-11-26 19:59:13.986209] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.766 [2024-11-26 19:59:13.986216] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:13.766 [2024-11-26 19:59:13.987003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:14.026 [2024-11-26 19:59:14.682452] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:14.026 Malloc0 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.026 19:59:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:14.026 [2024-11-26 19:59:14.736792] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3695067 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3695068 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3695069 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3695067 00:21:14.026 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:14.286 [2024-11-26 19:59:14.847700] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:14.286 [2024-11-26 19:59:14.848013] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:14.286 [2024-11-26 19:59:14.848300] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:15.228 Initializing NVMe Controllers 00:21:15.229 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:15.229 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:15.229 Initialization complete. Launching workers. 
00:21:15.229 ======================================================== 00:21:15.229 Latency(us) 00:21:15.229 Device Information : IOPS MiB/s Average min max 00:21:15.229 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1472.00 5.75 679.07 172.68 1056.36 00:21:15.229 ======================================================== 00:21:15.229 Total : 1472.00 5.75 679.07 172.68 1056.36 00:21:15.229 00:21:15.229 19:59:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3695068 00:21:15.229 Initializing NVMe Controllers 00:21:15.229 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:15.229 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:15.229 Initialization complete. Launching workers. 00:21:15.229 ======================================================== 00:21:15.229 Latency(us) 00:21:15.229 Device Information : IOPS MiB/s Average min max 00:21:15.229 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1469.00 5.74 680.86 148.66 910.65 00:21:15.229 ======================================================== 00:21:15.229 Total : 1469.00 5.74 680.86 148.66 910.65 00:21:15.229 00:21:15.229 Initializing NVMe Controllers 00:21:15.229 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:15.229 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:15.229 Initialization complete. Launching workers. 00:21:15.229 ======================================================== 00:21:15.229 Latency(us) 00:21:15.229 Device Information : IOPS MiB/s Average min max 00:21:15.229 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40912.45 40741.70 41072.99 00:21:15.229 ======================================================== 00:21:15.229 Total : 25.00 0.10 40912.45 40741.70 41072.99 00:21:15.229 00:21:15.229 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3695069 00:21:15.229 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:15.229 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:15.229 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:15.229 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:15.229 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:15.229 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:15.229 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:15.229 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:15.493 rmmod nvme_tcp 00:21:15.493 rmmod nvme_fabrics 00:21:15.493 rmmod nvme_keyring 00:21:15.493 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:15.493 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:15.493 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:15.493 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # 
'[' -n 3694830 ']' 00:21:15.493 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3694830 00:21:15.493 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3694830 ']' 00:21:15.493 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3694830 00:21:15.493 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:21:15.493 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:15.493 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3694830 00:21:15.493 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:15.493 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:15.493 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3694830' 00:21:15.493 killing process with pid 3694830 00:21:15.493 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3694830 00:21:15.493 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3694830 00:21:15.754 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:15.754 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:15.754 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:15.754 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:15.754 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:15.754 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:15.754 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:15.754 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:15.754 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:15.754 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.754 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:15.754 19:59:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.666 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:17.666 00:21:17.666 real 0m12.462s 00:21:17.666 user 0m7.946s 00:21:17.666 sys 0m6.713s 00:21:17.666 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:17.666 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:17.667 ************************************ 00:21:17.667 END TEST nvmf_control_msg_list 00:21:17.667 ************************************ 
00:21:17.667 19:59:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:17.667 19:59:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:17.667 19:59:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:17.667 19:59:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:17.928 ************************************ 00:21:17.928 START TEST nvmf_wait_for_buf 00:21:17.928 ************************************ 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:17.928 * Looking for test storage... 00:21:17.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:17.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.928 --rc genhtml_branch_coverage=1 00:21:17.928 --rc genhtml_function_coverage=1 00:21:17.928 --rc genhtml_legend=1 00:21:17.928 --rc geninfo_all_blocks=1 00:21:17.928 --rc geninfo_unexecuted_blocks=1 00:21:17.928 00:21:17.928 ' 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:17.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.928 --rc genhtml_branch_coverage=1 00:21:17.928 --rc genhtml_function_coverage=1 00:21:17.928 --rc genhtml_legend=1 00:21:17.928 --rc geninfo_all_blocks=1 00:21:17.928 --rc geninfo_unexecuted_blocks=1 00:21:17.928 00:21:17.928 ' 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:17.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.928 --rc genhtml_branch_coverage=1 00:21:17.928 --rc genhtml_function_coverage=1 00:21:17.928 --rc genhtml_legend=1 00:21:17.928 --rc geninfo_all_blocks=1 00:21:17.928 --rc geninfo_unexecuted_blocks=1 00:21:17.928 00:21:17.928 ' 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:17.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.928 --rc genhtml_branch_coverage=1 00:21:17.928 --rc genhtml_function_coverage=1 00:21:17.928 --rc genhtml_legend=1 00:21:17.928 --rc geninfo_all_blocks=1 00:21:17.928 --rc geninfo_unexecuted_blocks=1 00:21:17.928 00:21:17.928 ' 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:17.928 19:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:17.928 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:17.929 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:17.929 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:18.190 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:18.190 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:18.190 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:18.190 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.190 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.190 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.191 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:18.191 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.191 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:18.191 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:18.191 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:18.191 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:18.191 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:18.191 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:18.191 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:18.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:18.191 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:18.191 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:18.191 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:18.191 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:18.191 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:21:18.191 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:18.191 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:18.191 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:18.191 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:18.191 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.191 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:18.191 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.191 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:18.191 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:18.191 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:18.191 19:59:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:26.332 
19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:26.332 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:26.332 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:26.332 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:26.332 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:26.332 19:59:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:26.332 19:59:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:26.332 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:26.332 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:26.332 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:26.332 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:26.333 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:26.333 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:26.333 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:26.333 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:26.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:26.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.518 ms 00:21:26.333 00:21:26.333 --- 10.0.0.2 ping statistics --- 00:21:26.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.333 rtt min/avg/max/mdev = 0.518/0.518/0.518/0.000 ms 00:21:26.333 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:26.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:26.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:21:26.333 00:21:26.333 --- 10.0.0.1 ping statistics --- 00:21:26.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.333 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:21:26.333 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:26.333 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:26.333 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:26.333 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:26.333 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:26.333 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:26.333 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:26.333 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:26.333 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:26.333 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:26.333 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:26.333 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:26.333 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:26.333 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3699466 00:21:26.333 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3699466 00:21:26.333 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:26.333 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3699466 ']' 00:21:26.333 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.333 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:26.333 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:26.333 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:26.333 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:26.333 [2024-11-26 19:59:26.340916] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:21:26.333 [2024-11-26 19:59:26.340987] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.333 [2024-11-26 19:59:26.439871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.333 [2024-11-26 19:59:26.491407] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:26.333 [2024-11-26 19:59:26.491458] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:26.333 [2024-11-26 19:59:26.491469] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:26.333 [2024-11-26 19:59:26.491477] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:26.333 [2024-11-26 19:59:26.491483] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:26.333 [2024-11-26 19:59:26.492296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.594 19:59:27 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:26.594 Malloc0 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:26.594 [2024-11-26 19:59:27.318174] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.594 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:26.595 [2024-11-26 19:59:27.354507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.595 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.595 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:26.855 [2024-11-26 19:59:27.461282] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:28.240 Initializing NVMe Controllers 00:21:28.240 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:28.240 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:28.240 Initialization complete. Launching workers. 00:21:28.240 ======================================================== 00:21:28.240 Latency(us) 00:21:28.240 Device Information : IOPS MiB/s Average min max 00:21:28.240 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 25.00 3.12 165837.67 47869.22 191551.89 00:21:28.240 ======================================================== 00:21:28.240 Total : 25.00 3.12 165837.67 47869.22 191551.89 00:21:28.240 00:21:28.240 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:28.240 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:28.240 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.240 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:28.240 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.240 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=374 00:21:28.240 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 374 -eq 0 ]] 00:21:28.240 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:28.240 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:28.240 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:28.240 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:28.240 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:28.240 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:28.502 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:28.502 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:28.502 rmmod nvme_tcp 00:21:28.502 rmmod nvme_fabrics 00:21:28.502 rmmod nvme_keyring 00:21:28.502 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:28.502 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:28.502 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:28.502 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3699466 ']' 00:21:28.502 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3699466 00:21:28.502 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3699466 ']' 00:21:28.502 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3699466 00:21:28.502 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:21:28.502 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:28.502 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3699466 00:21:28.502 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:28.502 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:28.502 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3699466' 00:21:28.502 killing process with pid 3699466 00:21:28.502 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3699466 00:21:28.502 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3699466 00:21:28.763 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:28.763 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:28.763 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:28.763 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:28.763 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:28.763 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:28.763 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:28.763 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:28.763 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:28.763 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.763 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:28.763 19:59:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.677 19:59:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:30.677 00:21:30.677 real 0m12.910s 00:21:30.677 user 0m5.165s 00:21:30.677 sys 0m6.334s 00:21:30.677 19:59:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:30.677 19:59:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:30.677 ************************************ 00:21:30.677 END TEST nvmf_wait_for_buf 00:21:30.677 ************************************ 00:21:30.677 19:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:30.677 19:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:30.677 19:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:30.677 19:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:30.677 19:59:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:30.677 19:59:31 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:38.819 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:38.819 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:38.819 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:38.819 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:38.819 ************************************ 00:21:38.819 START TEST nvmf_perf_adq 00:21:38.819 ************************************ 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:38.819 * Looking for test storage... 00:21:38.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:38.819 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:38.820 19:59:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:38.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.820 --rc genhtml_branch_coverage=1 00:21:38.820 --rc genhtml_function_coverage=1 00:21:38.820 --rc genhtml_legend=1 00:21:38.820 --rc geninfo_all_blocks=1 00:21:38.820 --rc geninfo_unexecuted_blocks=1 00:21:38.820 00:21:38.820 ' 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:38.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.820 --rc genhtml_branch_coverage=1 00:21:38.820 --rc genhtml_function_coverage=1 00:21:38.820 --rc genhtml_legend=1 00:21:38.820 --rc geninfo_all_blocks=1 00:21:38.820 --rc geninfo_unexecuted_blocks=1 00:21:38.820 00:21:38.820 ' 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:38.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.820 --rc genhtml_branch_coverage=1 00:21:38.820 --rc genhtml_function_coverage=1 00:21:38.820 --rc genhtml_legend=1 00:21:38.820 --rc geninfo_all_blocks=1 00:21:38.820 --rc geninfo_unexecuted_blocks=1 00:21:38.820 00:21:38.820 ' 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:38.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.820 --rc genhtml_branch_coverage=1 00:21:38.820 --rc genhtml_function_coverage=1 00:21:38.820 --rc genhtml_legend=1 00:21:38.820 --rc geninfo_all_blocks=1 00:21:38.820 --rc geninfo_unexecuted_blocks=1 00:21:38.820 00:21:38.820 ' 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:38.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:38.820 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:38.820 19:59:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:45.413 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:45.413 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:45.413 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:45.413 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:45.413 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:45.413 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:45.413 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:45.413 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:45.413 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:45.413 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:45.413 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:45.413 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:45.413 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:45.413 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:45.413 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:45.413 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:45.413 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:45.413 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:45.414 19:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:45.414 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:45.414 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:45.414 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:45.414 19:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:45.414 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:45.414 19:59:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:47.387 19:59:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:49.302 19:59:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:54.596 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:54.596 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:54.596 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:54.596 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.597 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:54.597 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:54.597 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.597 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:54.597 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:54.597 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:54.597 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:54.597 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:54.597 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:54.597 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:54.597 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:54.597 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:54.597 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:54.597 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:54.597 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:54.597 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:54.597 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:54.597 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:54.597 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:54.597 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:54.597 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:54.597 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:54.597 19:59:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:54.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:54.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:21:54.597 00:21:54.597 --- 10.0.0.2 ping statistics --- 00:21:54.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.597 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:54.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:54.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:21:54.597 00:21:54.597 --- 10.0.0.1 ping statistics --- 00:21:54.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.597 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3709696 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3709696 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3709696 ']' 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:54.597 19:59:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.597 [2024-11-26 19:59:55.280021] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
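Condensed from the nvmf_tcp_init trace above, the test rig is a point-to-point link between the two E810 ports: the target port is moved into a private network namespace and the two sides then ping each other. This is only a summary sketch of commands already shown in the log (the names cvl_0_0, cvl_0_1, and cvl_0_0_ns_spdk come from the trace):

  # Target side lives in its own netns; the initiator stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Let NVMe/TCP (port 4420) in through the initiator-side interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Sanity pings in both directions, exactly as traced
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every later nvmf_tgt invocation is wrapped in ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD prefix built above), which is why the target listens on 10.0.0.2 while spdk_nvme_perf connects from the root namespace.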
00:21:54.597 [2024-11-26 19:59:55.280090] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.597 [2024-11-26 19:59:55.380471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:54.859 [2024-11-26 19:59:55.435023] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:54.859 [2024-11-26 19:59:55.435077] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:54.859 [2024-11-26 19:59:55.435086] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:54.859 [2024-11-26 19:59:55.435094] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:54.859 [2024-11-26 19:59:55.435100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:54.859 [2024-11-26 19:59:55.437513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.859 [2024-11-26 19:59:55.437674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:54.859 [2024-11-26 19:59:55.437839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.859 [2024-11-26 19:59:55.437840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:55.431 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:55.431 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:55.431 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:55.431 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:55.431 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.431 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.431 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:55.431 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:55.431 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.431 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:55.431 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.431 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.431 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:55.431 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:55.431 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.431 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.431 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.431 
19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:55.431 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.431 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.692 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.692 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:55.692 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.692 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.692 [2024-11-26 19:59:56.303336] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:55.692 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.692 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:55.692 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.692 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.692 Malloc1 00:21:55.692 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.692 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:55.692 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.692 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.692 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.692 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:55.692 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.692 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.693 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.693 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:55.693 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.693 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.693 [2024-11-26 19:59:56.380019] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:55.693 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.693 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3710021 00:21:55.693 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:55.693 19:59:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:57.604 19:59:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:57.604 19:59:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.604 19:59:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:57.604 19:59:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.604 19:59:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:57.604 "tick_rate": 2400000000, 00:21:57.604 "poll_groups": [ 00:21:57.604 { 00:21:57.604 "name": "nvmf_tgt_poll_group_000", 00:21:57.604 "admin_qpairs": 1, 00:21:57.604 "io_qpairs": 1, 00:21:57.604 "current_admin_qpairs": 1, 00:21:57.604 "current_io_qpairs": 1, 00:21:57.604 "pending_bdev_io": 0, 00:21:57.604 "completed_nvme_io": 16150, 00:21:57.604 "transports": [ 00:21:57.604 { 00:21:57.604 "trtype": "TCP" 00:21:57.604 } 00:21:57.604 ] 00:21:57.604 }, 00:21:57.604 { 00:21:57.604 "name": "nvmf_tgt_poll_group_001", 00:21:57.604 "admin_qpairs": 0, 00:21:57.604 "io_qpairs": 1, 00:21:57.604 "current_admin_qpairs": 0, 00:21:57.604 "current_io_qpairs": 1, 00:21:57.604 "pending_bdev_io": 0, 00:21:57.604 "completed_nvme_io": 16232, 00:21:57.604 "transports": [ 00:21:57.604 { 00:21:57.604 "trtype": "TCP" 00:21:57.604 } 00:21:57.604 ] 00:21:57.604 }, 00:21:57.604 { 00:21:57.604 "name": "nvmf_tgt_poll_group_002", 00:21:57.604 "admin_qpairs": 0, 00:21:57.604 "io_qpairs": 1, 00:21:57.604 "current_admin_qpairs": 0, 00:21:57.604 "current_io_qpairs": 1, 00:21:57.604 "pending_bdev_io": 0, 00:21:57.604 "completed_nvme_io": 16625, 00:21:57.604 "transports": [ 00:21:57.604 { 00:21:57.604 "trtype": "TCP" 00:21:57.604 } 00:21:57.604 ] 00:21:57.604 }, 00:21:57.604 { 00:21:57.604 "name": "nvmf_tgt_poll_group_003", 00:21:57.604 "admin_qpairs": 0, 00:21:57.604 "io_qpairs": 1, 00:21:57.604 "current_admin_qpairs": 0, 00:21:57.604 "current_io_qpairs": 1, 00:21:57.604 "pending_bdev_io": 0, 00:21:57.604 "completed_nvme_io": 15837, 00:21:57.604 "transports": [ 00:21:57.604 { 00:21:57.604 "trtype": "TCP" 00:21:57.604 } 00:21:57.604 ] 00:21:57.604 } 00:21:57.604 ] 00:21:57.604 }' 00:21:57.864 19:59:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:57.864 19:59:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:57.864 19:59:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:57.864 19:59:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:57.864 19:59:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3710021 00:22:05.996 Initializing NVMe Controllers 00:22:05.996 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:05.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:05.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:05.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:05.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7
00:22:05.996 Initialization complete. Launching workers.
00:22:05.996 ========================================================
00:22:05.996 Latency(us)
00:22:05.996 Device Information : IOPS MiB/s Average min max
00:22:05.996 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12880.30 50.31 4969.03 1407.74 12566.20
00:22:05.996 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13063.30 51.03 4898.82 1102.50 12559.54
00:22:05.996 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 12865.30 50.26 4975.21 1047.07 13061.00
00:22:05.996 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12486.50 48.78 5125.97 1246.33 13593.34
00:22:05.996 ========================================================
00:22:05.996 Total : 51295.39 200.37 4990.90 1047.07 13593.34
00:22:05.996
00:22:05.996 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:05.996 rmmod nvme_tcp 00:22:05.996 rmmod nvme_fabrics 00:22:05.996 rmmod nvme_keyring 00:22:05.996 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3709696 ']' 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3709696 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3709696 ']' 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3709696 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3709696 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3709696' killing process with pid 3709696 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3709696 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3709696 20:00:06
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:05.996 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:05.996 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:05.996 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:05.996 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:05.996 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:05.996 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:05.996 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:05.996 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:05.996 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.996 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:05.996 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.084 20:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:08.084 20:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:08.084 20:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:08.084 20:00:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:09.996 20:00:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:11.909 20:00:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:17.194 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:17.194 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:17.194 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:17.195 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:17.195 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:17.195 20:00:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:17.195 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:17.454 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:17.454 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:17.454 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:17.454 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:17.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:17.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:22:17.454 00:22:17.454 --- 10.0.0.2 ping statistics --- 00:22:17.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.454 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:22:17.454 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:17.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:17.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:22:17.454 00:22:17.454 --- 10.0.0.1 ping statistics --- 00:22:17.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.454 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:22:17.454 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.454 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:17.454 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:17.454 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.454 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:17.454 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:17.454 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.454 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:17.454 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:17.454 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:17.454 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:17.454 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:17.454 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:17.454 net.core.busy_poll = 1 00:22:17.454 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:17.454 net.core.busy_read = 1 00:22:17.454 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:17.454 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:17.714 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:17.714 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:17.714 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:17.714 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:17.714 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:17.714 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:17.714 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.714 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3715112 00:22:17.714 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3715112 00:22:17.714 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:17.714 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3715112 ']' 00:22:17.714 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.714 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:17.714 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.714 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:17.714 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.714 [2024-11-26 20:00:18.436606] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:22:17.714 [2024-11-26 20:00:18.436676] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.996 [2024-11-26 20:00:18.540250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:17.996 [2024-11-26 20:00:18.593876] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:17.996 [2024-11-26 20:00:18.593930] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.996 [2024-11-26 20:00:18.593940] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.996 [2024-11-26 20:00:18.593947] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.996 [2024-11-26 20:00:18.593954] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:17.996 [2024-11-26 20:00:18.595996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.996 [2024-11-26 20:00:18.596135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.996 [2024-11-26 20:00:18.596274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:17.996 [2024-11-26 20:00:18.596454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.566 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:18.566 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:18.566 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:18.566 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:18.566 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:18.566 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.566 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:18.566 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:18.566 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:18.566 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.566 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:18.566 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.566 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:18.566 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:18.566 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.566 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:18.566 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.566 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:18.566 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.566 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:18.825 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.825 20:00:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:18.825 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.825 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:18.825 [2024-11-26 20:00:19.462525] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.825 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.825 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:18.825 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.825 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:18.825 Malloc1 00:22:18.825 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.825 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:18.825 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.825 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:18.825 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.825 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:18.826 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.826 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:18.826 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.826 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:18.826 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.826 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:18.826 [2024-11-26 20:00:19.535940] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.826 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.826 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3715410 00:22:18.826 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:18.826 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:20.738 20:00:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:20.738 20:00:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.738 20:00:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:20.997 20:00:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:20.997 20:00:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{
00:22:20.997 "tick_rate": 2400000000,
00:22:20.997 "poll_groups": [
00:22:20.997 {
00:22:20.997 "name": "nvmf_tgt_poll_group_000",
00:22:20.997 "admin_qpairs": 1,
00:22:20.997 "io_qpairs": 4,
00:22:20.997 "current_admin_qpairs": 1,
00:22:20.997 "current_io_qpairs": 4,
00:22:20.997 "pending_bdev_io": 0,
00:22:20.997 "completed_nvme_io": 40288,
00:22:20.997 "transports": [
00:22:20.997 {
00:22:20.997 "trtype": "TCP"
00:22:20.997 }
00:22:20.997 ]
00:22:20.997 },
00:22:20.997 {
00:22:20.997 "name": "nvmf_tgt_poll_group_001",
00:22:20.997 "admin_qpairs": 0,
00:22:20.997 "io_qpairs": 0,
00:22:20.997 "current_admin_qpairs": 0,
00:22:20.998 "current_io_qpairs": 0,
00:22:20.998 "pending_bdev_io": 0,
00:22:20.998 "completed_nvme_io": 0,
00:22:20.998 "transports": [
00:22:20.998 {
00:22:20.998 "trtype": "TCP"
00:22:20.998 }
00:22:20.998 ]
00:22:20.998 },
00:22:20.998 {
00:22:20.998 "name": "nvmf_tgt_poll_group_002",
00:22:20.998 "admin_qpairs": 0,
00:22:20.998 "io_qpairs": 0,
00:22:20.998 "current_admin_qpairs": 0,
00:22:20.998 "current_io_qpairs": 0,
00:22:20.998 "pending_bdev_io": 0,
00:22:20.998 "completed_nvme_io": 0,
00:22:20.998 "transports": [
00:22:20.998 {
00:22:20.998 "trtype": "TCP"
00:22:20.998 }
00:22:20.998 ]
00:22:20.998 },
00:22:20.998 {
00:22:20.998 "name": "nvmf_tgt_poll_group_003",
00:22:20.998 "admin_qpairs": 0,
00:22:20.998 "io_qpairs": 0,
00:22:20.998 "current_admin_qpairs": 0,
00:22:20.998 "current_io_qpairs": 0,
00:22:20.998 "pending_bdev_io": 0,
00:22:20.998 "completed_nvme_io": 0,
00:22:20.998 "transports": [
00:22:20.998 {
00:22:20.998 "trtype": "TCP"
00:22:20.998 }
00:22:20.998 ]
00:22:20.998 }
00:22:20.998 ]
00:22:20.998 }'
00:22:20.998 20:00:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l
00:22:20.998 20:00:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'
00:22:20.998 20:00:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3
00:22:20.998 20:00:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]]
00:22:20.998 20:00:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3715410
00:22:29.129 Initializing NVMe Controllers
00:22:29.129 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:29.129 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:22:29.129 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:22:29.129 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:22:29.129 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:22:29.129 Initialization complete. Launching workers.
00:22:29.129 ========================================================
00:22:29.129 Latency(us)
00:22:29.129 Device Information : IOPS MiB/s Average min max
00:22:29.129 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6148.60 24.02 10446.22 1397.74 59976.40
00:22:29.129 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6912.00 27.00 9261.05 1151.51 54840.47
00:22:29.129 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5963.10 23.29 10734.13 1144.29 60291.26
00:22:29.129 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6533.20 25.52 9796.73 991.64 57018.72
00:22:29.129 ========================================================
00:22:29.129 Total : 25556.90 99.83 10026.83 991.64 60291.26
00:22:29.129
00:22:29.129 20:00:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini
00:22:29.129 20:00:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:29.129 20:00:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:22:29.129 20:00:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:29.129 20:00:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:22:29.129 20:00:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:29.129 20:00:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:29.129 rmmod nvme_tcp
00:22:29.129 rmmod nvme_fabrics
00:22:29.129 rmmod nvme_keyring
00:22:29.130 20:00:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:29.130 20:00:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:22:29.130 20:00:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:22:29.130 20:00:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3715112 ']'
00:22:29.130 20:00:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3715112
00:22:29.130 20:00:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3715112 ']'
00:22:29.130 20:00:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3715112
00:22:29.130 20:00:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:22:29.130 20:00:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:29.130 20:00:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3715112
00:22:29.130 20:00:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:29.130 20:00:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:29.130 20:00:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3715112'
00:22:29.130 killing process with pid 3715112
00:22:29.130 20:00:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3715112
00:22:29.130 20:00:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3715112
00:22:29.390 20:00:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:29.390
20:00:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:29.390 20:00:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:29.390 20:00:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:29.390 20:00:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:29.390 20:00:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:29.390 20:00:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:29.390 20:00:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:29.390 20:00:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:29.390 20:00:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.390 20:00:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:29.390 20:00:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:32.690 00:22:32.690 real 0m54.435s 00:22:32.690 user 2m50.415s 00:22:32.690 sys 0m11.206s 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:32.690 ************************************ 00:22:32.690 END TEST nvmf_perf_adq 00:22:32.690 ************************************ 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:32.690 ************************************ 00:22:32.690 START TEST nvmf_shutdown 00:22:32.690 ************************************ 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:32.690 * Looking for test storage... 
00:22:32.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:32.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.690 --rc genhtml_branch_coverage=1 00:22:32.690 --rc genhtml_function_coverage=1 00:22:32.690 --rc genhtml_legend=1 00:22:32.690 --rc geninfo_all_blocks=1 00:22:32.690 --rc geninfo_unexecuted_blocks=1 00:22:32.690 00:22:32.690 ' 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:32.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.690 --rc genhtml_branch_coverage=1 00:22:32.690 --rc genhtml_function_coverage=1 00:22:32.690 --rc genhtml_legend=1 00:22:32.690 --rc geninfo_all_blocks=1 00:22:32.690 --rc geninfo_unexecuted_blocks=1 00:22:32.690 00:22:32.690 ' 00:22:32.690 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:32.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.690 --rc genhtml_branch_coverage=1 00:22:32.690 --rc genhtml_function_coverage=1 00:22:32.690 --rc genhtml_legend=1 00:22:32.690 --rc geninfo_all_blocks=1 00:22:32.691 --rc geninfo_unexecuted_blocks=1 00:22:32.691 00:22:32.691 ' 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:32.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.691 --rc genhtml_branch_coverage=1 00:22:32.691 --rc genhtml_function_coverage=1 00:22:32.691 --rc genhtml_legend=1 00:22:32.691 --rc geninfo_all_blocks=1 00:22:32.691 --rc geninfo_unexecuted_blocks=1 00:22:32.691 00:22:32.691 ' 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
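The run of scripts/common.sh entries above is shutdown.sh checking whether the installed lcov predates 2.0: lt 1.15 2 calls cmp_versions, which splits both strings on '.', '-' and ':' (the IFS=.-: reads in the trace) and walks the components numerically. A minimal bash sketch of that idiom, reconstructed from the trace rather than copied from scripts/common.sh (the missing-component defaulting and the standalone framing are assumptions):

#!/usr/bin/env bash
# cmp_versions VER1 OP VER2 - compare dotted versions component by component.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v a b
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}    # absent components compare as 0
        ((a > b)) && { [[ $op == *'>'* ]]; return; }
        ((a < b)) && { [[ $op == *'<'* ]]; return; }
    done
    [[ $op == *'='* ]]    # all components equal: only ==, <= and >= succeed
}
lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "lcov 1.15 is older than 2"    # the branch taken in the trace above
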
00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:32.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:32.691 20:00:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:32.691 ************************************ 00:22:32.691 START TEST nvmf_shutdown_tc1 00:22:32.691 ************************************ 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:32.691 20:00:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:40.833 20:00:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:40.833 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:40.834 20:00:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:40.834 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:40.834 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:40.834 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:40.834 20:00:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:40.834 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:40.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:22:40.834 00:22:40.834 --- 10.0.0.2 ping statistics --- 00:22:40.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.834 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:40.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:22:40.834 00:22:40.834 --- 10.0.0.1 ping statistics --- 00:22:40.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.834 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:40.834 20:00:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:40.834 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3721875 00:22:40.834 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3721875 00:22:40.834 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:40.834 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3721875 ']' 00:22:40.834 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.834 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.834 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
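The nvmf_tcp_init sequence traced above (run here for the shutdown suite, and identically for perf_adq earlier) builds the whole fabric on one box: one port of the E810 pair is moved into a private network namespace and becomes the target at 10.0.0.2, while its sibling stays in the root namespace as the initiator at 10.0.0.1, with an iptables pinhole for port 4420 and a ping in each direction as the smoke test. Condensed into a standalone sketch, with the cvl_* names and 10.0.0.0/24 addressing taken from this run (they would differ on other nodes):

TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"            # target side lives in the namespace
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"     # initiator stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
# The ipts wrapper tags the rule so cleanup can strip it with iptables-save | grep -v SPDK_NVMF.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                              # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1          # target namespace -> initiator
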
00:22:40.835 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.835 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:40.835 [2024-11-26 20:00:41.059942] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:22:40.835 [2024-11-26 20:00:41.060006] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.835 [2024-11-26 20:00:41.161998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:40.835 [2024-11-26 20:00:41.217063] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.835 [2024-11-26 20:00:41.217115] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.835 [2024-11-26 20:00:41.217125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.835 [2024-11-26 20:00:41.217142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.835 [2024-11-26 20:00:41.217152] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:40.835 [2024-11-26 20:00:41.219233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.835 [2024-11-26 20:00:41.219476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:40.835 [2024-11-26 20:00:41.219637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:40.835 [2024-11-26 20:00:41.219638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.096 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:41.096 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:41.096 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:41.096 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:41.096 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:41.357 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.357 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:41.357 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.357 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:41.357 [2024-11-26 20:00:41.943291] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.357 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.357 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:41.357 20:00:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:41.357 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:41.357 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:41.357 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:41.357 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.357 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:41.357 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.357 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:41.357 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.357 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:41.357 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.357 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:41.357 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.357 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:41.357 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.357 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:41.357 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.357 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:41.357 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.357 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:41.357 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.357 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:41.357 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:41.357 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:41.357 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:41.357 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.357 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:41.357 Malloc1 
00:22:41.357 [2024-11-26 20:00:42.083063] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.357 Malloc2 00:22:41.357 Malloc3 00:22:41.618 Malloc4 00:22:41.618 Malloc5 00:22:41.618 Malloc6 00:22:41.618 Malloc7 00:22:41.618 Malloc8 00:22:41.618 Malloc9 00:22:41.880 Malloc10 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3722264 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3722264 /var/tmp/bdevperf.sock 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3722264 ']' 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:41.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
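Note: the traces that follow show gen_nvmf_target_json assembling the bdevperf config: one bdev_nvme_attach_controller stanza per subsystem (1 through 10) built from a heredoc, the pieces comma-joined via IFS, and the result validated and pretty-printed by jq. A condensed sketch of that pattern; the real helper uses a tab-indented <<-EOF body and a fuller bdev-subsystem scaffold, trimmed here to just enough wrapper to keep the output valid JSON:

config=()
for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
IFS=,
# ${config[*]} joins the stanzas with the first IFS character (a comma);
# jq validates the wrapped document and pretty-prints it.
jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
JSON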
00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:41.880 { 00:22:41.880 "params": { 00:22:41.880 "name": "Nvme$subsystem", 00:22:41.880 "trtype": "$TEST_TRANSPORT", 00:22:41.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.880 "adrfam": "ipv4", 00:22:41.880 "trsvcid": "$NVMF_PORT", 00:22:41.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.880 "hdgst": ${hdgst:-false}, 00:22:41.880 "ddgst": ${ddgst:-false} 00:22:41.880 }, 00:22:41.880 "method": "bdev_nvme_attach_controller" 00:22:41.880 } 00:22:41.880 EOF 00:22:41.880 )") 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:41.880 { 00:22:41.880 "params": { 00:22:41.880 "name": "Nvme$subsystem", 00:22:41.880 "trtype": "$TEST_TRANSPORT", 00:22:41.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.880 "adrfam": "ipv4", 00:22:41.880 "trsvcid": "$NVMF_PORT", 00:22:41.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.880 "hdgst": ${hdgst:-false}, 00:22:41.880 "ddgst": ${ddgst:-false} 00:22:41.880 }, 00:22:41.880 "method": "bdev_nvme_attach_controller" 00:22:41.880 } 00:22:41.880 EOF 00:22:41.880 )") 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:41.880 { 00:22:41.880 "params": { 00:22:41.880 "name": "Nvme$subsystem", 00:22:41.880 "trtype": "$TEST_TRANSPORT", 00:22:41.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.880 "adrfam": "ipv4", 00:22:41.880 "trsvcid": "$NVMF_PORT", 00:22:41.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.880 "hdgst": ${hdgst:-false}, 00:22:41.880 "ddgst": ${ddgst:-false} 00:22:41.880 }, 00:22:41.880 "method": "bdev_nvme_attach_controller" 
00:22:41.880 } 00:22:41.880 EOF 00:22:41.880 )") 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:41.880 { 00:22:41.880 "params": { 00:22:41.880 "name": "Nvme$subsystem", 00:22:41.880 "trtype": "$TEST_TRANSPORT", 00:22:41.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.880 "adrfam": "ipv4", 00:22:41.880 "trsvcid": "$NVMF_PORT", 00:22:41.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.880 "hdgst": ${hdgst:-false}, 00:22:41.880 "ddgst": ${ddgst:-false} 00:22:41.880 }, 00:22:41.880 "method": "bdev_nvme_attach_controller" 00:22:41.880 } 00:22:41.880 EOF 00:22:41.880 )") 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:41.880 { 00:22:41.880 "params": { 00:22:41.880 "name": "Nvme$subsystem", 00:22:41.880 "trtype": "$TEST_TRANSPORT", 00:22:41.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.880 "adrfam": "ipv4", 00:22:41.880 "trsvcid": "$NVMF_PORT", 00:22:41.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.880 "hdgst": ${hdgst:-false}, 00:22:41.880 "ddgst": ${ddgst:-false} 00:22:41.880 }, 00:22:41.880 "method": "bdev_nvme_attach_controller" 00:22:41.880 } 00:22:41.880 EOF 00:22:41.880 )") 00:22:41.880 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:41.881 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:41.881 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:41.881 { 00:22:41.881 "params": { 00:22:41.881 "name": "Nvme$subsystem", 00:22:41.881 "trtype": "$TEST_TRANSPORT", 00:22:41.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.881 "adrfam": "ipv4", 00:22:41.881 "trsvcid": "$NVMF_PORT", 00:22:41.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.881 "hdgst": ${hdgst:-false}, 00:22:41.881 "ddgst": ${ddgst:-false} 00:22:41.881 }, 00:22:41.881 "method": "bdev_nvme_attach_controller" 00:22:41.881 } 00:22:41.881 EOF 00:22:41.881 )") 00:22:41.881 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:41.881 [2024-11-26 20:00:42.596625] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:22:41.881 [2024-11-26 20:00:42.596701] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:41.881 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:41.881 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:41.881 { 00:22:41.881 "params": { 00:22:41.881 "name": "Nvme$subsystem", 00:22:41.881 "trtype": "$TEST_TRANSPORT", 00:22:41.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.881 "adrfam": "ipv4", 00:22:41.881 "trsvcid": "$NVMF_PORT", 00:22:41.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.881 "hdgst": ${hdgst:-false}, 00:22:41.881 "ddgst": ${ddgst:-false} 00:22:41.881 }, 00:22:41.881 "method": "bdev_nvme_attach_controller" 00:22:41.881 } 00:22:41.881 EOF 00:22:41.881 )") 00:22:41.881 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:41.881 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:41.881 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:41.881 { 00:22:41.881 "params": { 00:22:41.881 "name": "Nvme$subsystem", 00:22:41.881 "trtype": "$TEST_TRANSPORT", 00:22:41.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.881 "adrfam": "ipv4", 00:22:41.881 "trsvcid": "$NVMF_PORT", 00:22:41.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.881 "hdgst": ${hdgst:-false}, 00:22:41.881 "ddgst": ${ddgst:-false} 00:22:41.881 }, 00:22:41.881 "method": "bdev_nvme_attach_controller" 00:22:41.881 } 00:22:41.881 EOF 00:22:41.881 )") 00:22:41.881 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:41.881 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:41.881 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:41.881 { 00:22:41.881 "params": { 00:22:41.881 "name": "Nvme$subsystem", 00:22:41.881 "trtype": "$TEST_TRANSPORT", 00:22:41.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.881 "adrfam": "ipv4", 00:22:41.881 "trsvcid": "$NVMF_PORT", 00:22:41.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.881 "hdgst": ${hdgst:-false}, 00:22:41.881 "ddgst": ${ddgst:-false} 00:22:41.881 }, 00:22:41.881 "method": "bdev_nvme_attach_controller" 00:22:41.881 } 00:22:41.881 EOF 00:22:41.881 )") 00:22:41.881 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:41.881 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:41.881 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:41.881 { 00:22:41.881 "params": { 00:22:41.881 "name": "Nvme$subsystem", 00:22:41.881 "trtype": "$TEST_TRANSPORT", 00:22:41.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.881 "adrfam": "ipv4", 
00:22:41.881 "trsvcid": "$NVMF_PORT", 00:22:41.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.881 "hdgst": ${hdgst:-false}, 00:22:41.881 "ddgst": ${ddgst:-false} 00:22:41.881 }, 00:22:41.881 "method": "bdev_nvme_attach_controller" 00:22:41.881 } 00:22:41.881 EOF 00:22:41.881 )") 00:22:41.881 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:41.881 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:41.881 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:41.881 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:41.881 "params": { 00:22:41.881 "name": "Nvme1", 00:22:41.881 "trtype": "tcp", 00:22:41.881 "traddr": "10.0.0.2", 00:22:41.881 "adrfam": "ipv4", 00:22:41.881 "trsvcid": "4420", 00:22:41.881 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:41.881 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:41.881 "hdgst": false, 00:22:41.881 "ddgst": false 00:22:41.881 }, 00:22:41.881 "method": "bdev_nvme_attach_controller" 00:22:41.881 },{ 00:22:41.881 "params": { 00:22:41.881 "name": "Nvme2", 00:22:41.881 "trtype": "tcp", 00:22:41.881 "traddr": "10.0.0.2", 00:22:41.881 "adrfam": "ipv4", 00:22:41.881 "trsvcid": "4420", 00:22:41.881 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:41.881 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:41.881 "hdgst": false, 00:22:41.881 "ddgst": false 00:22:41.881 }, 00:22:41.881 "method": "bdev_nvme_attach_controller" 00:22:41.881 },{ 00:22:41.881 "params": { 00:22:41.881 "name": "Nvme3", 00:22:41.881 "trtype": "tcp", 00:22:41.881 "traddr": "10.0.0.2", 00:22:41.881 "adrfam": "ipv4", 00:22:41.881 "trsvcid": "4420", 00:22:41.881 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:41.881 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:41.881 "hdgst": false, 00:22:41.881 "ddgst": false 00:22:41.881 }, 00:22:41.881 "method": "bdev_nvme_attach_controller" 00:22:41.881 },{ 00:22:41.881 "params": { 00:22:41.881 "name": "Nvme4", 00:22:41.881 "trtype": "tcp", 00:22:41.881 "traddr": "10.0.0.2", 00:22:41.881 "adrfam": "ipv4", 00:22:41.881 "trsvcid": "4420", 00:22:41.881 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:41.881 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:41.881 "hdgst": false, 00:22:41.881 "ddgst": false 00:22:41.881 }, 00:22:41.881 "method": "bdev_nvme_attach_controller" 00:22:41.881 },{ 00:22:41.881 "params": { 00:22:41.881 "name": "Nvme5", 00:22:41.881 "trtype": "tcp", 00:22:41.881 "traddr": "10.0.0.2", 00:22:41.882 "adrfam": "ipv4", 00:22:41.882 "trsvcid": "4420", 00:22:41.882 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:41.882 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:41.882 "hdgst": false, 00:22:41.882 "ddgst": false 00:22:41.882 }, 00:22:41.882 "method": "bdev_nvme_attach_controller" 00:22:41.882 },{ 00:22:41.882 "params": { 00:22:41.882 "name": "Nvme6", 00:22:41.882 "trtype": "tcp", 00:22:41.882 "traddr": "10.0.0.2", 00:22:41.882 "adrfam": "ipv4", 00:22:41.882 "trsvcid": "4420", 00:22:41.882 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:41.882 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:41.882 "hdgst": false, 00:22:41.882 "ddgst": false 00:22:41.882 }, 00:22:41.882 "method": "bdev_nvme_attach_controller" 00:22:41.882 },{ 00:22:41.882 "params": { 00:22:41.882 "name": "Nvme7", 00:22:41.882 "trtype": "tcp", 00:22:41.882 "traddr": "10.0.0.2", 00:22:41.882 
"adrfam": "ipv4", 00:22:41.882 "trsvcid": "4420", 00:22:41.882 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:41.882 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:41.882 "hdgst": false, 00:22:41.882 "ddgst": false 00:22:41.882 }, 00:22:41.882 "method": "bdev_nvme_attach_controller" 00:22:41.882 },{ 00:22:41.882 "params": { 00:22:41.882 "name": "Nvme8", 00:22:41.882 "trtype": "tcp", 00:22:41.882 "traddr": "10.0.0.2", 00:22:41.882 "adrfam": "ipv4", 00:22:41.882 "trsvcid": "4420", 00:22:41.882 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:41.882 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:41.882 "hdgst": false, 00:22:41.882 "ddgst": false 00:22:41.882 }, 00:22:41.882 "method": "bdev_nvme_attach_controller" 00:22:41.882 },{ 00:22:41.882 "params": { 00:22:41.882 "name": "Nvme9", 00:22:41.882 "trtype": "tcp", 00:22:41.882 "traddr": "10.0.0.2", 00:22:41.882 "adrfam": "ipv4", 00:22:41.882 "trsvcid": "4420", 00:22:41.882 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:41.882 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:41.882 "hdgst": false, 00:22:41.882 "ddgst": false 00:22:41.882 }, 00:22:41.882 "method": "bdev_nvme_attach_controller" 00:22:41.882 },{ 00:22:41.882 "params": { 00:22:41.882 "name": "Nvme10", 00:22:41.882 "trtype": "tcp", 00:22:41.882 "traddr": "10.0.0.2", 00:22:41.882 "adrfam": "ipv4", 00:22:41.882 "trsvcid": "4420", 00:22:41.882 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:41.882 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:41.882 "hdgst": false, 00:22:41.882 "ddgst": false 00:22:41.882 }, 00:22:41.882 "method": "bdev_nvme_attach_controller" 00:22:41.882 }' 00:22:41.882 [2024-11-26 20:00:42.691824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.143 [2024-11-26 20:00:42.745077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.526 20:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:43.526 20:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:43.526 20:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:43.526 20:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.526 20:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:43.526 20:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.526 20:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3722264 00:22:43.526 20:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:43.526 20:00:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:44.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3722264 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:44.467 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3721875 00:22:44.467 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:44.467 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:44.467 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:44.467 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:44.467 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.467 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.467 { 00:22:44.467 "params": { 00:22:44.467 "name": "Nvme$subsystem", 00:22:44.467 "trtype": "$TEST_TRANSPORT", 00:22:44.467 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.467 "adrfam": "ipv4", 00:22:44.467 "trsvcid": "$NVMF_PORT", 00:22:44.467 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.467 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.467 "hdgst": ${hdgst:-false}, 00:22:44.467 "ddgst": ${ddgst:-false} 00:22:44.467 }, 00:22:44.467 "method": "bdev_nvme_attach_controller" 00:22:44.467 } 00:22:44.467 EOF 00:22:44.467 )") 00:22:44.467 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.467 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.467 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.467 { 00:22:44.467 "params": { 00:22:44.467 "name": "Nvme$subsystem", 00:22:44.467 "trtype": "$TEST_TRANSPORT", 00:22:44.467 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.467 "adrfam": "ipv4", 00:22:44.467 "trsvcid": "$NVMF_PORT", 00:22:44.467 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.467 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.467 "hdgst": ${hdgst:-false}, 00:22:44.467 "ddgst": ${ddgst:-false} 00:22:44.467 }, 00:22:44.467 "method": "bdev_nvme_attach_controller" 00:22:44.467 } 00:22:44.467 EOF 00:22:44.467 )") 00:22:44.467 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.467 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.467 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.467 { 00:22:44.467 "params": { 00:22:44.467 "name": "Nvme$subsystem", 00:22:44.467 "trtype": "$TEST_TRANSPORT", 00:22:44.467 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.467 "adrfam": "ipv4", 00:22:44.467 "trsvcid": "$NVMF_PORT", 00:22:44.467 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.467 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.467 "hdgst": ${hdgst:-false}, 00:22:44.467 "ddgst": ${ddgst:-false} 00:22:44.467 }, 00:22:44.467 "method": "bdev_nvme_attach_controller" 00:22:44.467 } 00:22:44.467 EOF 00:22:44.467 )") 00:22:44.467 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.467 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.467 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.467 { 00:22:44.467 "params": { 00:22:44.467 "name": "Nvme$subsystem", 00:22:44.467 "trtype": "$TEST_TRANSPORT", 00:22:44.467 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.467 "adrfam": "ipv4", 00:22:44.467 "trsvcid": "$NVMF_PORT", 00:22:44.467 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.467 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.467 "hdgst": ${hdgst:-false}, 00:22:44.467 "ddgst": ${ddgst:-false} 00:22:44.467 }, 00:22:44.468 "method": "bdev_nvme_attach_controller" 00:22:44.468 } 00:22:44.468 EOF 00:22:44.468 )") 00:22:44.468 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.468 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.468 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.468 { 00:22:44.468 "params": { 00:22:44.468 "name": "Nvme$subsystem", 00:22:44.468 "trtype": "$TEST_TRANSPORT", 00:22:44.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.468 "adrfam": "ipv4", 00:22:44.468 "trsvcid": "$NVMF_PORT", 00:22:44.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.468 "hdgst": ${hdgst:-false}, 00:22:44.468 "ddgst": ${ddgst:-false} 00:22:44.468 }, 00:22:44.468 "method": "bdev_nvme_attach_controller" 00:22:44.468 } 00:22:44.468 EOF 00:22:44.468 )") 00:22:44.468 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.468 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.468 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.468 { 00:22:44.468 "params": { 00:22:44.468 "name": "Nvme$subsystem", 00:22:44.468 "trtype": "$TEST_TRANSPORT", 00:22:44.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.468 "adrfam": "ipv4", 00:22:44.468 "trsvcid": "$NVMF_PORT", 00:22:44.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.468 "hdgst": ${hdgst:-false}, 00:22:44.468 "ddgst": ${ddgst:-false} 00:22:44.468 }, 00:22:44.468 "method": "bdev_nvme_attach_controller" 00:22:44.468 } 00:22:44.468 EOF 00:22:44.468 )") 00:22:44.468 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.468 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.468 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.468 { 00:22:44.468 "params": { 00:22:44.468 "name": "Nvme$subsystem", 00:22:44.468 "trtype": "$TEST_TRANSPORT", 00:22:44.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.468 "adrfam": "ipv4", 00:22:44.468 "trsvcid": "$NVMF_PORT", 00:22:44.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.468 "hdgst": ${hdgst:-false}, 00:22:44.468 "ddgst": ${ddgst:-false} 00:22:44.468 }, 00:22:44.468 "method": "bdev_nvme_attach_controller" 00:22:44.468 } 00:22:44.468 EOF 00:22:44.468 )") 00:22:44.468 [2024-11-26 20:00:45.055182] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:22:44.468 [2024-11-26 20:00:45.055236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3722655 ] 00:22:44.468 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.468 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.468 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.468 { 00:22:44.468 "params": { 00:22:44.468 "name": "Nvme$subsystem", 00:22:44.468 "trtype": "$TEST_TRANSPORT", 00:22:44.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.468 "adrfam": "ipv4", 00:22:44.468 "trsvcid": "$NVMF_PORT", 00:22:44.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.468 "hdgst": ${hdgst:-false}, 00:22:44.468 "ddgst": ${ddgst:-false} 00:22:44.468 }, 00:22:44.468 "method": "bdev_nvme_attach_controller" 00:22:44.468 } 00:22:44.468 EOF 00:22:44.468 )") 00:22:44.468 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.468 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.468 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.468 { 00:22:44.468 "params": { 00:22:44.468 "name": "Nvme$subsystem", 00:22:44.468 "trtype": "$TEST_TRANSPORT", 00:22:44.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.468 "adrfam": "ipv4", 00:22:44.468 "trsvcid": "$NVMF_PORT", 00:22:44.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.468 "hdgst": ${hdgst:-false}, 00:22:44.468 "ddgst": ${ddgst:-false} 00:22:44.468 }, 00:22:44.468 "method": "bdev_nvme_attach_controller" 00:22:44.468 } 00:22:44.468 EOF 00:22:44.468 )") 00:22:44.468 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.468 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.468 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.468 { 00:22:44.468 "params": { 00:22:44.468 "name": "Nvme$subsystem", 00:22:44.468 "trtype": "$TEST_TRANSPORT", 00:22:44.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.468 "adrfam": "ipv4", 00:22:44.468 "trsvcid": "$NVMF_PORT", 00:22:44.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.468 "hdgst": ${hdgst:-false}, 00:22:44.468 "ddgst": ${ddgst:-false} 00:22:44.468 }, 00:22:44.468 "method": "bdev_nvme_attach_controller" 00:22:44.468 } 00:22:44.468 EOF 00:22:44.468 )") 00:22:44.468 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.468 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
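Note: once the per-subsystem stanzas are assembled, the joined blob goes through jq and straight into the benchmark process; no config file is written. The earlier "Killed" message from shutdown.sh line 74 spells out the unexpanded form, --json <(gen_nvmf_target_json "${num_subsystems[@]}"), while the traced command line shows what the child actually sees: process substitution exposed as /dev/fd/62. Roughly, with the workload flags as traced:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 1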
00:22:44.468 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:44.468 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:44.468 "params": { 00:22:44.468 "name": "Nvme1", 00:22:44.468 "trtype": "tcp", 00:22:44.468 "traddr": "10.0.0.2", 00:22:44.468 "adrfam": "ipv4", 00:22:44.468 "trsvcid": "4420", 00:22:44.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:44.468 "hdgst": false, 00:22:44.468 "ddgst": false 00:22:44.468 }, 00:22:44.468 "method": "bdev_nvme_attach_controller" 00:22:44.468 },{ 00:22:44.468 "params": { 00:22:44.468 "name": "Nvme2", 00:22:44.468 "trtype": "tcp", 00:22:44.468 "traddr": "10.0.0.2", 00:22:44.468 "adrfam": "ipv4", 00:22:44.468 "trsvcid": "4420", 00:22:44.468 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:44.468 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:44.468 "hdgst": false, 00:22:44.468 "ddgst": false 00:22:44.468 }, 00:22:44.468 "method": "bdev_nvme_attach_controller" 00:22:44.468 },{ 00:22:44.468 "params": { 00:22:44.468 "name": "Nvme3", 00:22:44.468 "trtype": "tcp", 00:22:44.468 "traddr": "10.0.0.2", 00:22:44.468 "adrfam": "ipv4", 00:22:44.468 "trsvcid": "4420", 00:22:44.468 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:44.468 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:44.468 "hdgst": false, 00:22:44.468 "ddgst": false 00:22:44.468 }, 00:22:44.468 "method": "bdev_nvme_attach_controller" 00:22:44.468 },{ 00:22:44.468 "params": { 00:22:44.468 "name": "Nvme4", 00:22:44.468 "trtype": "tcp", 00:22:44.468 "traddr": "10.0.0.2", 00:22:44.468 "adrfam": "ipv4", 00:22:44.468 "trsvcid": "4420", 00:22:44.468 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:44.468 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:44.468 "hdgst": false, 00:22:44.468 "ddgst": false 00:22:44.468 }, 00:22:44.468 "method": "bdev_nvme_attach_controller" 00:22:44.468 },{ 00:22:44.468 "params": { 00:22:44.468 "name": "Nvme5", 00:22:44.468 "trtype": "tcp", 00:22:44.468 "traddr": "10.0.0.2", 00:22:44.468 "adrfam": "ipv4", 00:22:44.468 "trsvcid": "4420", 00:22:44.468 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:44.468 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:44.468 "hdgst": false, 00:22:44.468 "ddgst": false 00:22:44.468 }, 00:22:44.468 "method": "bdev_nvme_attach_controller" 00:22:44.468 },{ 00:22:44.468 "params": { 00:22:44.468 "name": "Nvme6", 00:22:44.468 "trtype": "tcp", 00:22:44.468 "traddr": "10.0.0.2", 00:22:44.468 "adrfam": "ipv4", 00:22:44.468 "trsvcid": "4420", 00:22:44.468 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:44.468 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:44.468 "hdgst": false, 00:22:44.468 "ddgst": false 00:22:44.468 }, 00:22:44.468 "method": "bdev_nvme_attach_controller" 00:22:44.468 },{ 00:22:44.468 "params": { 00:22:44.468 "name": "Nvme7", 00:22:44.468 "trtype": "tcp", 00:22:44.468 "traddr": "10.0.0.2", 00:22:44.468 "adrfam": "ipv4", 00:22:44.468 "trsvcid": "4420", 00:22:44.468 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:44.468 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:44.468 "hdgst": false, 00:22:44.468 "ddgst": false 00:22:44.468 }, 00:22:44.468 "method": "bdev_nvme_attach_controller" 00:22:44.468 },{ 00:22:44.468 "params": { 00:22:44.468 "name": "Nvme8", 00:22:44.468 "trtype": "tcp", 00:22:44.468 "traddr": "10.0.0.2", 00:22:44.468 "adrfam": "ipv4", 00:22:44.468 "trsvcid": "4420", 00:22:44.469 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:44.469 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:44.469 "hdgst": false,
00:22:44.469 "ddgst": false
00:22:44.469 },
00:22:44.469 "method": "bdev_nvme_attach_controller"
00:22:44.469 },{
00:22:44.469 "params": {
00:22:44.469 "name": "Nvme9",
00:22:44.469 "trtype": "tcp",
00:22:44.469 "traddr": "10.0.0.2",
00:22:44.469 "adrfam": "ipv4",
00:22:44.469 "trsvcid": "4420",
00:22:44.469 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:22:44.469 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:22:44.469 "hdgst": false,
00:22:44.469 "ddgst": false
00:22:44.469 },
00:22:44.469 "method": "bdev_nvme_attach_controller"
00:22:44.469 },{
00:22:44.469 "params": {
00:22:44.469 "name": "Nvme10",
00:22:44.469 "trtype": "tcp",
00:22:44.469 "traddr": "10.0.0.2",
00:22:44.469 "adrfam": "ipv4",
00:22:44.469 "trsvcid": "4420",
00:22:44.469 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:22:44.469 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:22:44.469 "hdgst": false,
00:22:44.469 "ddgst": false
00:22:44.469 },
00:22:44.469 "method": "bdev_nvme_attach_controller"
00:22:44.469 }'
00:22:44.469 [2024-11-26 20:00:45.144393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:44.469 [2024-11-26 20:00:45.180302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:45.854 Running I/O for 1 seconds...
00:22:47.057 1799.00 IOPS, 112.44 MiB/s
00:22:47.057 Latency(us)
00:22:47.057 [2024-11-26T19:00:47.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:47.057 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.057 Verification LBA range: start 0x0 length 0x400
00:22:47.057 Nvme1n1 : 1.14 223.91 13.99 0.00 0.00 282852.27 19005.44 255153.49
00:22:47.057 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.057 Verification LBA range: start 0x0 length 0x400
00:22:47.057 Nvme2n1 : 1.02 187.87 11.74 0.00 0.00 330584.75 18350.08 265639.25
00:22:47.057 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.057 Verification LBA range: start 0x0 length 0x400
00:22:47.057 Nvme3n1 : 1.08 237.31 14.83 0.00 0.00 256957.44 16930.13 239424.85
00:22:47.057 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.057 Verification LBA range: start 0x0 length 0x400
00:22:47.058 Nvme4n1 : 1.11 233.24 14.58 0.00 0.00 251862.78 20097.71 249910.61
00:22:47.058 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.058 Verification LBA range: start 0x0 length 0x400
00:22:47.058 Nvme5n1 : 1.09 239.63 14.98 0.00 0.00 244260.50 1160.53 251658.24
00:22:47.058 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.058 Verification LBA range: start 0x0 length 0x400
00:22:47.058 Nvme6n1 : 1.19 269.89 16.87 0.00 0.00 214734.68 17803.95 228939.09
00:22:47.058 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.058 Verification LBA range: start 0x0 length 0x400
00:22:47.058 Nvme7n1 : 1.19 268.90 16.81 0.00 0.00 212271.62 12397.23 248162.99
00:22:47.058 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.058 Verification LBA range: start 0x0 length 0x400
00:22:47.058 Nvme8n1 : 1.15 223.04 13.94 0.00 0.00 250084.27 13489.49 256901.12
00:22:47.058 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.058 Verification LBA range: start 0x0 length 0x400
00:22:47.058 Nvme9n1 : 1.19 321.71 20.11 0.00 0.00 170817.42 8683.52 249910.61
00:22:47.058 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.058 Verification LBA range: start 0x0 length 0x400
00:22:47.058 Nvme10n1 : 1.18 224.00 14.00 0.00 0.00 240066.07 1303.89 279620.27
00:22:47.058 [2024-11-26T19:00:47.879Z] ===================================================================================================================
00:22:47.058 [2024-11-26T19:00:47.879Z] Total : 2429.50 151.84 0.00 0.00 238542.07 1160.53 279620.27
00:22:47.058 20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup
20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync
20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e
20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20}
20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:47.058 rmmod nvme_tcp
00:22:47.058 rmmod nvme_fabrics
00:22:47.058 rmmod nvme_keyring
00:22:47.058 20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e
20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0
20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3721875 ']'
20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3721875
20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3721875 ']'
20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3721875
20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname
20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3721875
00:22:47.319 20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:47.319 20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:47.319 20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3721875' 00:22:47.319 killing process with pid 3721875 00:22:47.319 20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3721875 00:22:47.319 20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3721875 00:22:47.580 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:47.580 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:47.580 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:47.580 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:47.580 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:47.580 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:47.580 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:47.580 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:47.580 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:47.580 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.580 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.580 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.497 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:49.497 00:22:49.497 real 0m16.781s 00:22:49.497 user 0m33.688s 00:22:49.497 sys 0m6.901s 00:22:49.497 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:49.497 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:49.497 ************************************ 00:22:49.497 END TEST nvmf_shutdown_tc1 00:22:49.497 ************************************ 00:22:49.497 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:49.497 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:49.497 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:49.497 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:49.759 ************************************ 00:22:49.759 START TEST nvmf_shutdown_tc2 00:22:49.759 ************************************ 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:22:49.759 20:00:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:49.759 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:49.759 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.759 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:49.760 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:49.760 20:00:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:49.760 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:49.760 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.021 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:50.021 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.021 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:50.021 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:50.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:50.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:22:50.021 00:22:50.021 --- 10.0.0.2 ping statistics --- 00:22:50.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.021 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:22:50.021 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:50.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:22:50.021 00:22:50.021 --- 10.0.0.1 ping statistics --- 00:22:50.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.021 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:22:50.021 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.021 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:50.021 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:50.021 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.021 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:50.021 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:50.021 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.021 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:50.021 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:50.021 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:50.021 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:50.021 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:50.021 20:00:50 
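The dance above is nvmf_tcp_init untangled: the test builds a loopback NVMe-oF topology out of the two E810 ports it discovered, moving one (cvl_0_0) into a private network namespace to act as the target while its sibling (cvl_0_1) stays in the host namespace as the initiator. Condensed from the traced commands (interface names and addresses are the ones this run chose):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port, isolated
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side (host)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> host

The sub-millisecond round trips in the two ping reports confirm the path before any NVMe/TCP traffic is attempted; the SPDK_NVMF comment tags the firewall rule so teardown can find and remove exactly this rule later. modprobe nvme-tcp then loads the kernel initiator-side transport.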
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.021 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3723926 00:22:50.021 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3723926 00:22:50.021 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:50.021 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3723926 ']' 00:22:50.021 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.021 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:50.021 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.021 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.021 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.021 [2024-11-26 20:00:50.761776] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:22:50.021 [2024-11-26 20:00:50.761829] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.021 [2024-11-26 20:00:50.829058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:50.282 [2024-11-26 20:00:50.858858] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.282 [2024-11-26 20:00:50.858885] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.282 [2024-11-26 20:00:50.858892] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:50.282 [2024-11-26 20:00:50.858896] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:50.282 [2024-11-26 20:00:50.858901] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
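nvmfappstart launches nvmf_tgt inside the namespace and then parks in waitforlisten until the target's RPC socket answers. Only the entry echo is visible in the trace; the polling it hides looks roughly like the sketch below, a paraphrase of the autotest_common.sh helper rather than a verbatim copy (the rpc_get_methods probe is the conventional "are you up yet?" RPC, but treat the details as assumed):

    # paraphrased shape of waitforlisten; internals not shown in this trace
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died during startup
            rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }

The DPDK EAL banner and the four "Reactor started on core N" notices that follow are nvmf_tgt coming up on cores 1-4, per the -m 0x1E core mask.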
00:22:50.282 [2024-11-26 20:00:50.860146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.282 [2024-11-26 20:00:50.860296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:50.282 [2024-11-26 20:00:50.860584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.282 [2024-11-26 20:00:50.860585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:50.283 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:50.283 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:50.283 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:50.283 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:50.283 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.283 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.283 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:50.283 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.283 20:00:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.283 [2024-11-26 20:00:50.996359] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.283 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.283 Malloc1 00:22:50.544 [2024-11-26 20:00:51.103510] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.544 Malloc2 00:22:50.544 Malloc3 00:22:50.544 Malloc4 00:22:50.544 Malloc5 00:22:50.544 Malloc6 00:22:50.544 Malloc7 00:22:50.544 Malloc8 00:22:50.805 Malloc9 00:22:50.805 Malloc10 00:22:50.805 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.805 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:50.805 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:50.805 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.805 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3724124 00:22:50.805 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3724124 /var/tmp/bdevperf.sock 00:22:50.805 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3724124 ']' 00:22:50.805 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:50.805 20:00:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:50.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.806 { 00:22:50.806 "params": { 00:22:50.806 "name": "Nvme$subsystem", 00:22:50.806 "trtype": "$TEST_TRANSPORT", 00:22:50.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.806 "adrfam": "ipv4", 00:22:50.806 "trsvcid": "$NVMF_PORT", 00:22:50.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.806 "hdgst": ${hdgst:-false}, 00:22:50.806 "ddgst": ${ddgst:-false} 00:22:50.806 }, 00:22:50.806 "method": "bdev_nvme_attach_controller" 00:22:50.806 } 00:22:50.806 EOF 00:22:50.806 )") 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.806 { 00:22:50.806 "params": { 00:22:50.806 "name": "Nvme$subsystem", 00:22:50.806 "trtype": "$TEST_TRANSPORT", 00:22:50.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.806 "adrfam": "ipv4", 00:22:50.806 "trsvcid": "$NVMF_PORT", 00:22:50.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.806 "hdgst": ${hdgst:-false}, 00:22:50.806 "ddgst": ${ddgst:-false} 00:22:50.806 }, 00:22:50.806 "method": "bdev_nvme_attach_controller" 00:22:50.806 } 00:22:50.806 EOF 00:22:50.806 )") 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.806 { 00:22:50.806 "params": { 00:22:50.806 
"name": "Nvme$subsystem", 00:22:50.806 "trtype": "$TEST_TRANSPORT", 00:22:50.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.806 "adrfam": "ipv4", 00:22:50.806 "trsvcid": "$NVMF_PORT", 00:22:50.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.806 "hdgst": ${hdgst:-false}, 00:22:50.806 "ddgst": ${ddgst:-false} 00:22:50.806 }, 00:22:50.806 "method": "bdev_nvme_attach_controller" 00:22:50.806 } 00:22:50.806 EOF 00:22:50.806 )") 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.806 { 00:22:50.806 "params": { 00:22:50.806 "name": "Nvme$subsystem", 00:22:50.806 "trtype": "$TEST_TRANSPORT", 00:22:50.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.806 "adrfam": "ipv4", 00:22:50.806 "trsvcid": "$NVMF_PORT", 00:22:50.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.806 "hdgst": ${hdgst:-false}, 00:22:50.806 "ddgst": ${ddgst:-false} 00:22:50.806 }, 00:22:50.806 "method": "bdev_nvme_attach_controller" 00:22:50.806 } 00:22:50.806 EOF 00:22:50.806 )") 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.806 { 00:22:50.806 "params": { 00:22:50.806 "name": "Nvme$subsystem", 00:22:50.806 "trtype": "$TEST_TRANSPORT", 00:22:50.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.806 "adrfam": "ipv4", 00:22:50.806 "trsvcid": "$NVMF_PORT", 00:22:50.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.806 "hdgst": ${hdgst:-false}, 00:22:50.806 "ddgst": ${ddgst:-false} 00:22:50.806 }, 00:22:50.806 "method": "bdev_nvme_attach_controller" 00:22:50.806 } 00:22:50.806 EOF 00:22:50.806 )") 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.806 { 00:22:50.806 "params": { 00:22:50.806 "name": "Nvme$subsystem", 00:22:50.806 "trtype": "$TEST_TRANSPORT", 00:22:50.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.806 "adrfam": "ipv4", 00:22:50.806 "trsvcid": "$NVMF_PORT", 00:22:50.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.806 "hdgst": ${hdgst:-false}, 00:22:50.806 "ddgst": ${ddgst:-false} 00:22:50.806 }, 00:22:50.806 "method": "bdev_nvme_attach_controller" 00:22:50.806 } 00:22:50.806 EOF 00:22:50.806 )") 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.806 [2024-11-26 20:00:51.552545] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:22:50.806 [2024-11-26 20:00:51.552598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724124 ] 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.806 { 00:22:50.806 "params": { 00:22:50.806 "name": "Nvme$subsystem", 00:22:50.806 "trtype": "$TEST_TRANSPORT", 00:22:50.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.806 "adrfam": "ipv4", 00:22:50.806 "trsvcid": "$NVMF_PORT", 00:22:50.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.806 "hdgst": ${hdgst:-false}, 00:22:50.806 "ddgst": ${ddgst:-false} 00:22:50.806 }, 00:22:50.806 "method": "bdev_nvme_attach_controller" 00:22:50.806 } 00:22:50.806 EOF 00:22:50.806 )") 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.806 { 00:22:50.806 "params": { 00:22:50.806 "name": "Nvme$subsystem", 00:22:50.806 "trtype": "$TEST_TRANSPORT", 00:22:50.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.806 "adrfam": "ipv4", 00:22:50.806 "trsvcid": "$NVMF_PORT", 00:22:50.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.806 "hdgst": ${hdgst:-false}, 00:22:50.806 "ddgst": ${ddgst:-false} 00:22:50.806 }, 00:22:50.806 "method": "bdev_nvme_attach_controller" 00:22:50.806 } 00:22:50.806 EOF 00:22:50.806 )") 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.806 { 00:22:50.806 "params": { 00:22:50.806 "name": "Nvme$subsystem", 00:22:50.806 "trtype": "$TEST_TRANSPORT", 00:22:50.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.806 "adrfam": "ipv4", 00:22:50.806 "trsvcid": "$NVMF_PORT", 00:22:50.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.806 "hdgst": ${hdgst:-false}, 00:22:50.806 "ddgst": ${ddgst:-false} 00:22:50.806 }, 00:22:50.806 "method": "bdev_nvme_attach_controller" 00:22:50.806 } 00:22:50.806 EOF 00:22:50.806 )") 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.806 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.806 { 00:22:50.807 "params": { 00:22:50.807 "name": "Nvme$subsystem", 00:22:50.807 "trtype": "$TEST_TRANSPORT", 00:22:50.807 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.807 
"adrfam": "ipv4", 00:22:50.807 "trsvcid": "$NVMF_PORT", 00:22:50.807 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.807 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.807 "hdgst": ${hdgst:-false}, 00:22:50.807 "ddgst": ${ddgst:-false} 00:22:50.807 }, 00:22:50.807 "method": "bdev_nvme_attach_controller" 00:22:50.807 } 00:22:50.807 EOF 00:22:50.807 )") 00:22:50.807 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.807 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:22:50.807 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:50.807 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:50.807 "params": { 00:22:50.807 "name": "Nvme1", 00:22:50.807 "trtype": "tcp", 00:22:50.807 "traddr": "10.0.0.2", 00:22:50.807 "adrfam": "ipv4", 00:22:50.807 "trsvcid": "4420", 00:22:50.807 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.807 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:50.807 "hdgst": false, 00:22:50.807 "ddgst": false 00:22:50.807 }, 00:22:50.807 "method": "bdev_nvme_attach_controller" 00:22:50.807 },{ 00:22:50.807 "params": { 00:22:50.807 "name": "Nvme2", 00:22:50.807 "trtype": "tcp", 00:22:50.807 "traddr": "10.0.0.2", 00:22:50.807 "adrfam": "ipv4", 00:22:50.807 "trsvcid": "4420", 00:22:50.807 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:50.807 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:50.807 "hdgst": false, 00:22:50.807 "ddgst": false 00:22:50.807 }, 00:22:50.807 "method": "bdev_nvme_attach_controller" 00:22:50.807 },{ 00:22:50.807 "params": { 00:22:50.807 "name": "Nvme3", 00:22:50.807 "trtype": "tcp", 00:22:50.807 "traddr": "10.0.0.2", 00:22:50.807 "adrfam": "ipv4", 00:22:50.807 "trsvcid": "4420", 00:22:50.807 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:50.807 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:50.807 "hdgst": false, 00:22:50.807 "ddgst": false 00:22:50.807 }, 00:22:50.807 "method": "bdev_nvme_attach_controller" 00:22:50.807 },{ 00:22:50.807 "params": { 00:22:50.807 "name": "Nvme4", 00:22:50.807 "trtype": "tcp", 00:22:50.807 "traddr": "10.0.0.2", 00:22:50.807 "adrfam": "ipv4", 00:22:50.807 "trsvcid": "4420", 00:22:50.807 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:50.807 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:50.807 "hdgst": false, 00:22:50.807 "ddgst": false 00:22:50.807 }, 00:22:50.807 "method": "bdev_nvme_attach_controller" 00:22:50.807 },{ 00:22:50.807 "params": { 00:22:50.807 "name": "Nvme5", 00:22:50.807 "trtype": "tcp", 00:22:50.807 "traddr": "10.0.0.2", 00:22:50.807 "adrfam": "ipv4", 00:22:50.807 "trsvcid": "4420", 00:22:50.807 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:50.807 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:50.807 "hdgst": false, 00:22:50.807 "ddgst": false 00:22:50.807 }, 00:22:50.807 "method": "bdev_nvme_attach_controller" 00:22:50.807 },{ 00:22:50.807 "params": { 00:22:50.807 "name": "Nvme6", 00:22:50.807 "trtype": "tcp", 00:22:50.807 "traddr": "10.0.0.2", 00:22:50.807 "adrfam": "ipv4", 00:22:50.807 "trsvcid": "4420", 00:22:50.807 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:50.807 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:50.807 "hdgst": false, 00:22:50.807 "ddgst": false 00:22:50.807 }, 00:22:50.807 "method": "bdev_nvme_attach_controller" 00:22:50.807 },{ 00:22:50.807 "params": { 00:22:50.807 "name": "Nvme7", 00:22:50.807 "trtype": "tcp", 00:22:50.807 "traddr": "10.0.0.2", 
00:22:50.807 "adrfam": "ipv4", 00:22:50.807 "trsvcid": "4420", 00:22:50.807 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:50.807 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:50.807 "hdgst": false, 00:22:50.807 "ddgst": false 00:22:50.807 }, 00:22:50.807 "method": "bdev_nvme_attach_controller" 00:22:50.807 },{ 00:22:50.807 "params": { 00:22:50.807 "name": "Nvme8", 00:22:50.807 "trtype": "tcp", 00:22:50.807 "traddr": "10.0.0.2", 00:22:50.807 "adrfam": "ipv4", 00:22:50.807 "trsvcid": "4420", 00:22:50.807 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:50.807 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:50.807 "hdgst": false, 00:22:50.807 "ddgst": false 00:22:50.807 }, 00:22:50.807 "method": "bdev_nvme_attach_controller" 00:22:50.807 },{ 00:22:50.807 "params": { 00:22:50.807 "name": "Nvme9", 00:22:50.807 "trtype": "tcp", 00:22:50.807 "traddr": "10.0.0.2", 00:22:50.807 "adrfam": "ipv4", 00:22:50.807 "trsvcid": "4420", 00:22:50.807 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:50.807 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:50.807 "hdgst": false, 00:22:50.807 "ddgst": false 00:22:50.807 }, 00:22:50.807 "method": "bdev_nvme_attach_controller" 00:22:50.807 },{ 00:22:50.807 "params": { 00:22:50.807 "name": "Nvme10", 00:22:50.807 "trtype": "tcp", 00:22:50.807 "traddr": "10.0.0.2", 00:22:50.807 "adrfam": "ipv4", 00:22:50.807 "trsvcid": "4420", 00:22:50.807 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:50.807 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:50.807 "hdgst": false, 00:22:50.807 "ddgst": false 00:22:50.807 }, 00:22:50.807 "method": "bdev_nvme_attach_controller" 00:22:50.807 }' 00:22:51.091 [2024-11-26 20:00:51.641792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.092 [2024-11-26 20:00:51.678333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.475 Running I/O for 10 seconds... 
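Everything between nvmfpid and this point is config plumbing for the initiator side. The subsystems themselves were created a step earlier by replaying rpcs.txt (its contents are never echoed; only the resulting Malloc1-Malloc10 bdevs and the 10.0.0.2:4420 listener notice are visible). gen_nvmf_target_json then emits one bdev_nvme_attach_controller stanza per subsystem, as printed above for Nvme1 through Nvme10, and bdevperf reads the result via process substitution, which is what the /dev/fd/63 on its command line is. A condensed sketch of the pattern: the real helper builds each stanza with a heredoc, as the trace shows, while this sketch uses jq for brevity, and it omits the outer {"subsystems": [...]} document the stanzas are spliced into before the jq validation step:

    # condensed reimplementation, assumptions noted above
    gen_nvmf_target_json() {
        local s config=()
        for s in "${@:-1}"; do
            config+=("$(jq -cn --arg s "$s" '{
                params: {
                    name: "Nvme\($s)", trtype: "tcp", traddr: "10.0.0.2",
                    adrfam: "ipv4", trsvcid: "4420",
                    subnqn: "nqn.2016-06.io.spdk:cnode\($s)",
                    hostnqn: "nqn.2016-06.io.spdk:host\($s)",
                    hdgst: false, ddgst: false
                },
                method: "bdev_nvme_attach_controller"
            }')")
        done
        local IFS=,
        printf '%s\n' "${config[*]}"   # comma-joins the stanzas, as seen in the trace
    }

    build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json {1..10}) \
        -q 64 -o 65536 -w verify -t 10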
00:22:52.475 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.475 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:52.475 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:52.475 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.475 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:52.475 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.475 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:52.475 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:52.475 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:52.475 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:52.475 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:52.475 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:52.475 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:52.475 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:52.475 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:52.475 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.475 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:52.735 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.735 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:52.735 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:52.735 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:53.047 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:53.047 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:53.047 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:53.047 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:53.047 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.047 20:00:53 
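waitforio is the gate between "bdevperf is running" and "it is safe to yank the target": shutdown_tc2 refuses to kill anything until Nvme1n1 has completed at least 100 reads, so the shutdown genuinely lands mid-I/O. The loop traced above and just below, condensed:

    # condensed from the traced waitforio logic in target/shutdown.sh
    i=10 ret=1
    while (( i != 0 )); do
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
        (( i-- ))
    done

Here it takes three polls: 3 reads, then 67, then 131, at which point the 100-read threshold is met and the helper returns success.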
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:53.047 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.047 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:53.047 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:53.047 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:53.307 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:53.307 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:53.307 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:53.307 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:53.307 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.307 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:53.307 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.307 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:53.307 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:53.307 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:53.307 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:53.307 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:53.307 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3724124 00:22:53.307 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3724124 ']' 00:22:53.307 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3724124 00:22:53.307 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:53.307 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:53.307 20:00:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3724124 00:22:53.307 20:00:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:53.307 20:00:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:53.307 20:00:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3724124' 00:22:53.307 killing process with pid 3724124 00:22:53.307 20:00:54 
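killprocess (autotest_common.sh) wraps the teardown of each daemon; its traced steps condense to the sketch below (the sudo-ownership branch is elided, since process_name resolves to reactor_0 here, and the kill/wait pair lands just after this point in the trace):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid"                                   # assert it is still alive
        process_name=$(ps --no-headers -o comm= "$pid")  # -> reactor_0
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                      # reap it and surface its exit status
    }

The wait is what lets bdevperf flush its final statistics: the latency table below is printed between the SIGTERM and the process actually exiting.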
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3724124
00:22:53.307 20:00:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3724124
00:22:53.307 Received shutdown signal, test time was about 1.000189 seconds
00:22:53.307
00:22:53.307 Latency(us)
00:22:53.307 [2024-11-26T19:00:54.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:53.307 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.307 Verification LBA range: start 0x0 length 0x400
00:22:53.307 Nvme1n1 : 1.00 256.18 16.01 0.00 0.00 246933.97 14636.37 249910.61
00:22:53.307 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.307 Verification LBA range: start 0x0 length 0x400
00:22:53.307 Nvme2n1 : 0.99 261.99 16.37 0.00 0.00 236084.67 2566.83 248162.99
00:22:53.307 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.307 Verification LBA range: start 0x0 length 0x400
00:22:53.307 Nvme3n1 : 0.98 261.72 16.36 0.00 0.00 232241.49 21189.97 242920.11
00:22:53.307 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.307 Verification LBA range: start 0x0 length 0x400
00:22:53.307 Nvme4n1 : 0.98 260.67 16.29 0.00 0.00 228437.97 17367.04 246415.36
00:22:53.307 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.307 Verification LBA range: start 0x0 length 0x400
00:22:53.307 Nvme5n1 : 0.97 203.43 12.71 0.00 0.00 285164.11 2184.53 253405.87
00:22:53.307 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.307 Verification LBA range: start 0x0 length 0x400
00:22:53.307 Nvme6n1 : 1.00 257.08 16.07 0.00 0.00 221765.55 19770.03 244667.73
00:22:53.307 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.307 Verification LBA range: start 0x0 length 0x400
00:22:53.307 Nvme7n1 : 0.99 259.76 16.24 0.00 0.00 214924.16 18896.21 242920.11
00:22:53.307 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.307 Verification LBA range: start 0x0 length 0x400
00:22:53.307 Nvme8n1 : 0.99 258.59 16.16 0.00 0.00 211261.87 20862.29 241172.48
00:22:53.307 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.307 Verification LBA range: start 0x0 length 0x400
00:22:53.307 Nvme9n1 : 0.98 196.75 12.30 0.00 0.00 271050.24 25777.49 255153.49
00:22:53.307 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.307 Verification LBA range: start 0x0 length 0x400
00:22:53.307 Nvme10n1 : 0.97 197.41 12.34 0.00 0.00 263548.02 20753.07 269134.51
00:22:53.307 [2024-11-26T19:00:54.128Z] ===================================================================================================================
00:22:53.307 [2024-11-26T19:00:54.128Z] Total : 2413.58 150.85 0.00 0.00 238631.41 2184.53 269134.51
00:22:53.567 20:00:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:22:54.506 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3723926
00:22:54.506 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:22:54.506 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:54.506 20:00:55
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:54.506 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:54.506 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:54.506 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:54.506 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:54.506 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:54.506 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:54.506 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:54.506 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:54.506 rmmod nvme_tcp 00:22:54.506 rmmod nvme_fabrics 00:22:54.506 rmmod nvme_keyring 00:22:54.506 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:54.506 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:54.506 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:54.506 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3723926 ']' 00:22:54.506 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3723926 00:22:54.506 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3723926 ']' 00:22:54.506 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3723926 00:22:54.506 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:54.506 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.506 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3723926 00:22:54.766 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:54.766 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:54.766 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3723926' 00:22:54.766 killing process with pid 3723926 00:22:54.766 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3723926 00:22:54.766 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3723926 00:22:55.026 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:55.026 20:00:55 
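stoptarget and nvmftestfini then unwind everything the init built, in reverse: unload the kernel modules, kill the target, strip only the SPDK-tagged firewall rules, and drop the namespace. Condensed from the trace above and the lines that follow (the unload loop retries up to 20 times even though one pass suffices here, the break condition is paraphrased, and the netns removal runs with xtrace suppressed, so its ip command is inferred rather than traced):

    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # rmmod output above: nvme_tcp, nvme_fabrics, nvme_keyring
    done
    modprobe -v -r nvme-fabrics
    set -e
    killprocess "$nvmfpid"                 # 3723926, process_name=reactor_1
    # iptr: drop only rules tagged SPDK_NVMF, leave the rest of the ruleset alone
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk        # inside _remove_spdk_ns (assumed; xtrace is off there)
    ip -4 addr flush cvl_0_1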
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:55.026 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:55.026 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:55.026 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:55.026 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:55.026 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:55.026 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:55.026 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:55.026 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.026 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:55.026 20:00:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.940 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:56.940 00:22:56.940 real 0m7.359s 00:22:56.940 user 0m21.727s 00:22:56.940 sys 0m1.246s 00:22:56.940 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:56.940 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:56.940 ************************************ 00:22:56.940 END TEST nvmf_shutdown_tc2 00:22:56.940 ************************************ 00:22:56.940 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:56.940 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:56.940 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:56.940 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:57.201 ************************************ 00:22:57.201 START TEST nvmf_shutdown_tc3 00:22:57.201 ************************************ 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:57.201 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:57.201 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:57.201 20:00:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:57.201 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:57.201 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.201 20:00:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:57.201 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:57.202 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:57.202 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:57.202 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:57.202 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:57.202 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:57.202 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:57.202 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:57.202 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:57.202 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:57.202 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:57.202 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:57.202 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:57.202 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:57.202 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:57.202 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:57.202 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:57.202 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:57.202 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:57.202 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:57.202 20:00:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:57.463 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:57.463 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:57.463 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:57.463 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:57.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:57.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:22:57.463 00:22:57.463 --- 10.0.0.2 ping statistics --- 00:22:57.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.463 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:22:57.463 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:57.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:57.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:22:57.463 00:22:57.463 --- 10.0.0.1 ping statistics --- 00:22:57.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.463 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:22:57.463 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.463 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:57.463 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:57.463 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.463 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:57.463 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:57.463 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.463 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:57.463 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:57.463 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:57.463 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:57.463 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:57.463 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.463 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3725588 00:22:57.463 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3725588 00:22:57.463 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:57.463 20:00:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3725588 ']' 00:22:57.463 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.463 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:57.463 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.463 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:57.463 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.463 [2024-11-26 20:00:58.230860] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:22:57.463 [2024-11-26 20:00:58.230924] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.722 [2024-11-26 20:00:58.326115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:57.722 [2024-11-26 20:00:58.360245] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.722 [2024-11-26 20:00:58.360275] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.722 [2024-11-26 20:00:58.360281] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.722 [2024-11-26 20:00:58.360286] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.722 [2024-11-26 20:00:58.360290] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:57.722 [2024-11-26 20:00:58.361610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.722 [2024-11-26 20:00:58.361767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:57.722 [2024-11-26 20:00:58.361917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.722 [2024-11-26 20:00:58.361919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:58.293 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:58.293 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:58.293 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:58.293 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:58.293 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:58.293 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.293 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:58.293 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.293 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:58.293 [2024-11-26 20:00:59.073623] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.293 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.293 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:58.293 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:58.293 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:58.293 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:58.293 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:58.293 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.293 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:58.293 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.293 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:58.293 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.293 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:58.293 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:58.293 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:58.293 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.293 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:58.560 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.560 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:58.560 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.560 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:58.560 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.560 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:58.560 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.560 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:58.560 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:58.560 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:58.560 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:58.560 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.560 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:58.560 Malloc1 00:22:58.560 [2024-11-26 20:00:59.181929] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.560 Malloc2 00:22:58.560 Malloc3 00:22:58.560 Malloc4 00:22:58.560 Malloc5 00:22:58.560 Malloc6 00:22:58.822 Malloc7 00:22:58.822 Malloc8 00:22:58.822 Malloc9 00:22:58.822 Malloc10 00:22:58.822 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.822 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:58.822 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:58.822 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:58.822 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3725863 00:22:58.822 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3725863 /var/tmp/bdevperf.sock 00:22:58.822 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3725863 ']' 00:22:58.822 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:58.822 20:00:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.822 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:58.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:58.822 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:58.822 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.822 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:58.822 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:58.822 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:58.822 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:58.822 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.822 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.822 { 00:22:58.822 "params": { 00:22:58.822 "name": "Nvme$subsystem", 00:22:58.822 "trtype": "$TEST_TRANSPORT", 00:22:58.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.822 "adrfam": "ipv4", 00:22:58.822 "trsvcid": "$NVMF_PORT", 00:22:58.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.822 "hdgst": ${hdgst:-false}, 00:22:58.822 "ddgst": ${ddgst:-false} 00:22:58.822 }, 00:22:58.822 "method": "bdev_nvme_attach_controller" 00:22:58.822 } 00:22:58.822 EOF 00:22:58.822 )") 00:22:58.822 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:58.822 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.822 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.822 { 00:22:58.822 "params": { 00:22:58.822 "name": "Nvme$subsystem", 00:22:58.822 "trtype": "$TEST_TRANSPORT", 00:22:58.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.822 "adrfam": "ipv4", 00:22:58.822 "trsvcid": "$NVMF_PORT", 00:22:58.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.822 "hdgst": ${hdgst:-false}, 00:22:58.822 "ddgst": ${ddgst:-false} 00:22:58.822 }, 00:22:58.822 "method": "bdev_nvme_attach_controller" 00:22:58.822 } 00:22:58.822 EOF 00:22:58.822 )") 00:22:58.822 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:58.822 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.822 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.822 { 00:22:58.822 "params": { 00:22:58.822 
"name": "Nvme$subsystem", 00:22:58.822 "trtype": "$TEST_TRANSPORT", 00:22:58.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.823 "adrfam": "ipv4", 00:22:58.823 "trsvcid": "$NVMF_PORT", 00:22:58.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.823 "hdgst": ${hdgst:-false}, 00:22:58.823 "ddgst": ${ddgst:-false} 00:22:58.823 }, 00:22:58.823 "method": "bdev_nvme_attach_controller" 00:22:58.823 } 00:22:58.823 EOF 00:22:58.823 )") 00:22:58.823 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:58.823 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.823 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.823 { 00:22:58.823 "params": { 00:22:58.823 "name": "Nvme$subsystem", 00:22:58.823 "trtype": "$TEST_TRANSPORT", 00:22:58.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.823 "adrfam": "ipv4", 00:22:58.823 "trsvcid": "$NVMF_PORT", 00:22:58.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.823 "hdgst": ${hdgst:-false}, 00:22:58.823 "ddgst": ${ddgst:-false} 00:22:58.823 }, 00:22:58.823 "method": "bdev_nvme_attach_controller" 00:22:58.823 } 00:22:58.823 EOF 00:22:58.823 )") 00:22:58.823 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:58.823 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.823 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.823 { 00:22:58.823 "params": { 00:22:58.823 "name": "Nvme$subsystem", 00:22:58.823 "trtype": "$TEST_TRANSPORT", 00:22:58.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.823 "adrfam": "ipv4", 00:22:58.823 "trsvcid": "$NVMF_PORT", 00:22:58.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.823 "hdgst": ${hdgst:-false}, 00:22:58.823 "ddgst": ${ddgst:-false} 00:22:58.823 }, 00:22:58.823 "method": "bdev_nvme_attach_controller" 00:22:58.823 } 00:22:58.823 EOF 00:22:58.823 )") 00:22:58.823 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:58.823 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.823 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.823 { 00:22:58.823 "params": { 00:22:58.823 "name": "Nvme$subsystem", 00:22:58.823 "trtype": "$TEST_TRANSPORT", 00:22:58.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.823 "adrfam": "ipv4", 00:22:58.823 "trsvcid": "$NVMF_PORT", 00:22:58.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.823 "hdgst": ${hdgst:-false}, 00:22:58.823 "ddgst": ${ddgst:-false} 00:22:58.823 }, 00:22:58.823 "method": "bdev_nvme_attach_controller" 00:22:58.823 } 00:22:58.823 EOF 00:22:58.823 )") 00:22:58.823 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:58.823 [2024-11-26 20:00:59.627181] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:22:58.823 [2024-11-26 20:00:59.627235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3725863 ] 00:22:58.823 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.823 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.823 { 00:22:58.823 "params": { 00:22:58.823 "name": "Nvme$subsystem", 00:22:58.823 "trtype": "$TEST_TRANSPORT", 00:22:58.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.823 "adrfam": "ipv4", 00:22:58.823 "trsvcid": "$NVMF_PORT", 00:22:58.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.823 "hdgst": ${hdgst:-false}, 00:22:58.823 "ddgst": ${ddgst:-false} 00:22:58.823 }, 00:22:58.823 "method": "bdev_nvme_attach_controller" 00:22:58.823 } 00:22:58.823 EOF 00:22:58.823 )") 00:22:58.823 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:58.823 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:58.823 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:58.823 { 00:22:58.823 "params": { 00:22:58.823 "name": "Nvme$subsystem", 00:22:58.823 "trtype": "$TEST_TRANSPORT", 00:22:58.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:58.823 "adrfam": "ipv4", 00:22:58.823 "trsvcid": "$NVMF_PORT", 00:22:58.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:58.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:58.823 "hdgst": ${hdgst:-false}, 00:22:58.823 "ddgst": ${ddgst:-false} 00:22:58.823 }, 00:22:58.823 "method": "bdev_nvme_attach_controller" 00:22:58.823 } 00:22:58.823 EOF 00:22:58.823 )") 00:22:59.084 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:59.084 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:59.084 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:59.084 { 00:22:59.084 "params": { 00:22:59.084 "name": "Nvme$subsystem", 00:22:59.084 "trtype": "$TEST_TRANSPORT", 00:22:59.084 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:59.084 "adrfam": "ipv4", 00:22:59.084 "trsvcid": "$NVMF_PORT", 00:22:59.084 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:59.084 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:59.084 "hdgst": ${hdgst:-false}, 00:22:59.084 "ddgst": ${ddgst:-false} 00:22:59.084 }, 00:22:59.085 "method": "bdev_nvme_attach_controller" 00:22:59.085 } 00:22:59.085 EOF 00:22:59.085 )") 00:22:59.085 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:59.085 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:59.085 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:59.085 { 00:22:59.085 "params": { 00:22:59.085 "name": "Nvme$subsystem", 00:22:59.085 "trtype": "$TEST_TRANSPORT", 00:22:59.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:59.085 
"adrfam": "ipv4", 00:22:59.085 "trsvcid": "$NVMF_PORT", 00:22:59.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:59.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:59.085 "hdgst": ${hdgst:-false}, 00:22:59.085 "ddgst": ${ddgst:-false} 00:22:59.085 }, 00:22:59.085 "method": "bdev_nvme_attach_controller" 00:22:59.085 } 00:22:59.085 EOF 00:22:59.085 )") 00:22:59.085 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:59.085 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:22:59.085 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:59.085 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:59.085 "params": { 00:22:59.085 "name": "Nvme1", 00:22:59.085 "trtype": "tcp", 00:22:59.085 "traddr": "10.0.0.2", 00:22:59.085 "adrfam": "ipv4", 00:22:59.085 "trsvcid": "4420", 00:22:59.085 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.085 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:59.085 "hdgst": false, 00:22:59.085 "ddgst": false 00:22:59.085 }, 00:22:59.085 "method": "bdev_nvme_attach_controller" 00:22:59.085 },{ 00:22:59.085 "params": { 00:22:59.085 "name": "Nvme2", 00:22:59.085 "trtype": "tcp", 00:22:59.085 "traddr": "10.0.0.2", 00:22:59.085 "adrfam": "ipv4", 00:22:59.085 "trsvcid": "4420", 00:22:59.085 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:59.085 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:59.085 "hdgst": false, 00:22:59.085 "ddgst": false 00:22:59.085 }, 00:22:59.085 "method": "bdev_nvme_attach_controller" 00:22:59.085 },{ 00:22:59.085 "params": { 00:22:59.085 "name": "Nvme3", 00:22:59.085 "trtype": "tcp", 00:22:59.085 "traddr": "10.0.0.2", 00:22:59.085 "adrfam": "ipv4", 00:22:59.085 "trsvcid": "4420", 00:22:59.085 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:59.085 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:59.085 "hdgst": false, 00:22:59.085 "ddgst": false 00:22:59.085 }, 00:22:59.085 "method": "bdev_nvme_attach_controller" 00:22:59.085 },{ 00:22:59.085 "params": { 00:22:59.085 "name": "Nvme4", 00:22:59.085 "trtype": "tcp", 00:22:59.085 "traddr": "10.0.0.2", 00:22:59.085 "adrfam": "ipv4", 00:22:59.085 "trsvcid": "4420", 00:22:59.085 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:59.085 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:59.085 "hdgst": false, 00:22:59.085 "ddgst": false 00:22:59.085 }, 00:22:59.085 "method": "bdev_nvme_attach_controller" 00:22:59.085 },{ 00:22:59.085 "params": { 00:22:59.085 "name": "Nvme5", 00:22:59.085 "trtype": "tcp", 00:22:59.085 "traddr": "10.0.0.2", 00:22:59.085 "adrfam": "ipv4", 00:22:59.085 "trsvcid": "4420", 00:22:59.085 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:59.085 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:59.085 "hdgst": false, 00:22:59.085 "ddgst": false 00:22:59.085 }, 00:22:59.085 "method": "bdev_nvme_attach_controller" 00:22:59.085 },{ 00:22:59.085 "params": { 00:22:59.085 "name": "Nvme6", 00:22:59.085 "trtype": "tcp", 00:22:59.085 "traddr": "10.0.0.2", 00:22:59.085 "adrfam": "ipv4", 00:22:59.085 "trsvcid": "4420", 00:22:59.085 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:59.085 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:59.085 "hdgst": false, 00:22:59.085 "ddgst": false 00:22:59.085 }, 00:22:59.085 "method": "bdev_nvme_attach_controller" 00:22:59.085 },{ 00:22:59.085 "params": { 00:22:59.085 "name": "Nvme7", 00:22:59.085 "trtype": "tcp", 00:22:59.085 "traddr": "10.0.0.2", 
00:22:59.085 "adrfam": "ipv4", 00:22:59.085 "trsvcid": "4420", 00:22:59.085 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:59.085 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:59.085 "hdgst": false, 00:22:59.085 "ddgst": false 00:22:59.085 }, 00:22:59.085 "method": "bdev_nvme_attach_controller" 00:22:59.085 },{ 00:22:59.085 "params": { 00:22:59.085 "name": "Nvme8", 00:22:59.085 "trtype": "tcp", 00:22:59.085 "traddr": "10.0.0.2", 00:22:59.085 "adrfam": "ipv4", 00:22:59.085 "trsvcid": "4420", 00:22:59.085 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:59.085 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:59.085 "hdgst": false, 00:22:59.085 "ddgst": false 00:22:59.085 }, 00:22:59.085 "method": "bdev_nvme_attach_controller" 00:22:59.085 },{ 00:22:59.085 "params": { 00:22:59.085 "name": "Nvme9", 00:22:59.085 "trtype": "tcp", 00:22:59.085 "traddr": "10.0.0.2", 00:22:59.085 "adrfam": "ipv4", 00:22:59.085 "trsvcid": "4420", 00:22:59.085 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:59.085 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:59.085 "hdgst": false, 00:22:59.085 "ddgst": false 00:22:59.085 }, 00:22:59.085 "method": "bdev_nvme_attach_controller" 00:22:59.085 },{ 00:22:59.085 "params": { 00:22:59.085 "name": "Nvme10", 00:22:59.085 "trtype": "tcp", 00:22:59.085 "traddr": "10.0.0.2", 00:22:59.085 "adrfam": "ipv4", 00:22:59.085 "trsvcid": "4420", 00:22:59.085 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:59.085 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:59.085 "hdgst": false, 00:22:59.085 "ddgst": false 00:22:59.085 }, 00:22:59.085 "method": "bdev_nvme_attach_controller" 00:22:59.085 }' 00:22:59.085 [2024-11-26 20:00:59.715367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.085 [2024-11-26 20:00:59.751985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.469 Running I/O for 10 seconds... 
00:23:00.469 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:00.469 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:00.469 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:00.469 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.469 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:00.730 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.730 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:00.730 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:00.730 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:00.730 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:00.730 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:00.730 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:00.730 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:00.730 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:00.730 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:00.730 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:00.730 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.730 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:00.730 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.730 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:00.730 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:00.730 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:00.990 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:00.990 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:00.990 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:00.990 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:00.990 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.990 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:00.990 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.990 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:00.990 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:00.990 20:01:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:01.251 20:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:01.251 20:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:01.251 20:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:01.251 20:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:01.251 20:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.251 20:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:01.251 20:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.535 20:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:01.535 20:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:01.535 20:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:23:01.535 20:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:23:01.535 20:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:23:01.535 20:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3725588 00:23:01.535 20:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3725588 ']' 00:23:01.535 20:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3725588 00:23:01.535 20:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:23:01.535 20:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:01.535 20:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3725588 00:23:01.535 20:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:01.535 20:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:01.535 20:01:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3725588' 00:23:01.535 killing process with pid 3725588 00:23:01.535 20:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3725588 00:23:01.535 20:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3725588 00:23:01.535 [2024-11-26 20:01:02.154977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) 
to be set 00:23:01.535 [2024-11-26 20:01:02.155115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.535 [2024-11-26 20:01:02.155181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.536 [2024-11-26 20:01:02.155185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.536 [2024-11-26 20:01:02.155190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.536 [2024-11-26 20:01:02.155195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.536 [2024-11-26 20:01:02.155199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.536 [2024-11-26 20:01:02.155204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.536 [2024-11-26 20:01:02.155209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.536 [2024-11-26 20:01:02.155213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.536 [2024-11-26 20:01:02.155218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a3e0 is same with the state(6) to be set 00:23:01.536 [2024-11-26 20:01:02.155222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x234a3e0 is same with the state(6) to be set [identical tcp.c:1773 *ERROR* message repeated for tqpair=0x234a3e0 through 2024-11-26 20:01:02.155326]
00:23:01.536 [2024-11-26 20:01:02.157191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23481d0 is same with the state(6) to be set [identical message repeated through 20:01:02.157516]
00:23:01.537 [2024-11-26 20:01:02.158314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23486c0 is same with the state(6) to be set [identical message repeated through 20:01:02.158632]
00:23:01.538 [2024-11-26 20:01:02.159116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2348b90 is same with the state(6) to be set [identical message repeated through 20:01:02.159445]
00:23:01.539 [2024-11-26 20:01:02.160116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2349060 is same with the state(6) to be set [identical message repeated through 20:01:02.160426]
00:23:01.540 [2024-11-26 20:01:02.161587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2349a20 is same with the state(6) to be set
00:23:01.540 [2024-11-26 20:01:02.161775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2349ef0 is same with the state(6) to be set [identical message repeated through 20:01:02.162079]
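For context on the *ERROR* flood above: tcp.c:1773 sits inside nvmf_tcp_qpair_set_recv_state, and the message indicates the target is asked to move a qpair's receive state to the value it already holds. A minimal C sketch of such a guard, reconstructed from the log text alone (the field names and the surrounding transition logic are assumptions, not copied from SPDK source):

	static void
	nvmf_tcp_qpair_set_recv_state(struct spdk_nvmf_tcp_qpair *tqpair,
				      enum nvme_tcp_pdu_recv_state state)
	{
		if (tqpair->recv_state == state) {
			/* Re-entering the current state is treated as a no-op;
			 * each attempt only re-logs the line seen above, where
			 * "state(6)" is the numeric value of the requested state. */
			SPDK_ERRLOG("The recv state of tqpair=%p is same with the state(%d) to be set\n",
				    tqpair, state);
			return;
		}

		tqpair->recv_state = state;
		/* ...per-state bookkeeping elided in this sketch... */
	}

The repetition pattern (hundreds of hits per tqpair pointer within a few hundred microseconds) is consistent with a poll loop repeatedly tripping this guard while the connections are being torn down.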
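The *NOTICE* pairs that follow show every outstanding WRITE and READ on qid:1 completing with ABORTED - SQ DELETION (00/08), i.e. status code type 0x00 (generic) and status code 0x08: the commands were aborted because their submission queue was deleted, which is expected when a qpair is destroyed with I/O in flight. A hypothetical SPDK completion callback that classifies this status (the callback name and the retry comment are illustrative, not part of this test):

	#include "spdk/nvme.h"

	/* Hypothetical I/O completion callback: decode the (00/08) status
	 * printed below as SCT 0x00 (generic) / SC 0x08 (aborted - SQ deletion). */
	static void
	io_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
	{
		if (spdk_nvme_cpl_is_error(cpl) &&
		    cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
		    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
			/* The command did not fail on the media: it was aborted
			 * because its submission queue went away, so it may be
			 * retried on a new qpair if the workload requires it. */
		}
	}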
00:23:01.541 [2024-11-26 20:01:02.171632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:01.541 [2024-11-26 20:01:02.171670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[analogous WRITE/ABORTED - SQ DELETION pairs repeated for cid:19 through cid:63, lba advancing by 128 from 27008 to 32640]
00:23:01.542 [2024-11-26 20:01:02.172465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:01.542 [2024-11-26 20:01:02.172472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[analogous READ/ABORTED - SQ DELETION pairs repeated for cid:1 through cid:6, lba advancing by 128 from 24704 to 25344]
00:23:01.543 [2024-11-26 20:01:02.172584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.543 [2024-11-26 20:01:02.172591] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.543 [2024-11-26 20:01:02.172600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.543 [2024-11-26 20:01:02.172607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.543 [2024-11-26 20:01:02.172617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.543 [2024-11-26 20:01:02.172624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.543 [2024-11-26 20:01:02.172634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.543 [2024-11-26 20:01:02.172641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.543 [2024-11-26 20:01:02.172650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.543 [2024-11-26 20:01:02.172657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.543 [2024-11-26 20:01:02.172666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.543 [2024-11-26 20:01:02.172674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.543 [2024-11-26 20:01:02.172683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.543 [2024-11-26 20:01:02.172690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.543 [2024-11-26 20:01:02.172701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.543 [2024-11-26 20:01:02.172709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.543 [2024-11-26 20:01:02.172719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.543 [2024-11-26 20:01:02.172726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.543 [2024-11-26 20:01:02.172735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.543 [2024-11-26 20:01:02.172742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.543 [2024-11-26 20:01:02.172751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.543 [2024-11-26 20:01:02.172759] nvme_qpair.c: 
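The NOTICE pairs condensed above are the host-side NVMe driver failing back every outstanding I/O after the target side went away: status 00/08 is the generic status code type with status code 0x08, "Command Aborted due to SQ Deletion", meaning the command was never executed. A minimal sketch of a host completion callback that classifies this status, assuming a standard SPDK NVMe host application (struct io_ctx and io_cb are illustrative names; the spdk_nvme_* symbols and status constants are the real ones):

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

struct io_ctx {
	uint64_t lba;   /* starting LBA of the command, e.g. 28672 */
	bool retry;     /* set when the command is safe to resubmit */
};

/* Completion callback registered with spdk_nvme_ns_cmd_read()/write(). */
static void
io_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	struct io_ctx *ctx = cb_arg;

	if (!spdk_nvme_cpl_is_error(cpl)) {
		return;                        /* normal completion */
	}

	/* "ABORTED - SQ DELETION (00/08)": sct 0x0 (generic), sc 0x08.
	 * The command never executed, so it can be resubmitted once the
	 * qpair is connected again. */
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		ctx->retry = true;
		return;
	}

	/* Anything else is a real I/O error for this workload. */
	fprintf(stderr, "I/O at lba %" PRIu64 " failed: sct=0x%x sc=0x%x\n",
		ctx->lba, cpl->status.sct, cpl->status.sc);
}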
00:23:01.543 [2024-11-26 20:01:02.172790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:01.543 [2024-11-26 20:01:02.191875 - 20:01:02.192777] nvme_qpair.c/nvme_tcp.c: [condensed: ten repetitions of the same admin-qpair teardown block, one per controller: four ASYNC EVENT REQUEST (0c) commands (qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000) each completed as ABORTED - SQ DELETION (00/08) qid:0, then nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x... is same with the state(6) to be set; the tqpairs are 0x96bfc0, 0xdd7d50, 0xd8e6a0, 0x885610, 0x96dcc0, 0x96d850, 0xd8ec90, 0xd99180, 0xde5bb0 and 0xd99980]
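The -6 in the ERROR line above is -ENXIO: once the TCP connection to cnode7 dropped, spdk_nvme_qpair_process_completions() failed the qpair, the driver completed everything still outstanding as ABORTED - SQ DELETION (the floods on either side of this point), and the TCP PDU receive state machine was left parked in its error state, which is why nvme_tcp_qpair_set_recv_state keeps reporting that the recv state is already state(6). A sketch of the polling side under those assumptions; struct app_state is an invented container, and the recovery policy shown (a full spdk_nvme_ctrlr_reset()) is only one option:

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

struct app_state {
	struct spdk_nvme_ctrlr *ctrlr;
	struct spdk_nvme_qpair *qpair;
};

/* Poll the I/O qpair; a negative return means the qpair itself failed,
 * not an individual command. -ENXIO (-6) is what the log above prints
 * as "CQ transport error -6 ... on qpair id 1". */
static int
poll_io(struct app_state *app)
{
	int32_t rc = spdk_nvme_qpair_process_completions(app->qpair,
							 0 /* no limit */);
	if (rc < 0) {
		fprintf(stderr, "qpair dead (rc=%d), resetting controller\n",
			rc);
		/* One recovery option: reset the controller and rebuild the
		 * qpairs; SPDK's reconnect/failover paths are alternatives. */
		return spdk_nvme_ctrlr_reset(app->ctrlr);
	}
	return rc;                             /* completions drained */
}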
00:23:01.544 [2024-11-26 20:01:02.192870 - 20:01:02.194949] nvme_qpair.c: [condensed: a second abort flood on qid:1, same pattern as above: WRITE cid:53-63 (lba:31360-32640), then READ cid:0-52 (lba:24576-31232), then WRITE cid:0-51 (lba:24576-31104), all nsid:1 len:128, every command completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the flood continues past the end of this excerpt]
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.548 [2024-11-26 20:01:02.194958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.548 [2024-11-26 20:01:02.194965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.548 [2024-11-26 20:01:02.194975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.548 [2024-11-26 20:01:02.194982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.548 [2024-11-26 20:01:02.194992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.548 [2024-11-26 20:01:02.194999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.548 [2024-11-26 20:01:02.195008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.548 [2024-11-26 20:01:02.195016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.548 [2024-11-26 20:01:02.195025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.548 [2024-11-26 20:01:02.195033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.195042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.195050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.195059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.195066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.195076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.195083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.195092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.195100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.195110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.195117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.195127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.195135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.195145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.195152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.196598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.196620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.196634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.196644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.196656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.196665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.196677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.196685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.196697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.196706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.196717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.196726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.196737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.196746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.196755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.196762] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.196772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.196780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.196789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.196797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.196810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.196818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.196831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.196838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.196847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.196855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.196864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.196871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.196881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.196888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.196897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.196905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.196914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.196922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.196931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.196938] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.196947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.196954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.196964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.196971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.196981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.196988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.196997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.197004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.197014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.197021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.197030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.197039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.197049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.197058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.197068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.197075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.197084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.549 [2024-11-26 20:01:02.197092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.549 [2024-11-26 20:01:02.197101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.197108] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.197118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.197125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.197134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205356] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205523] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205693] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.550 [2024-11-26 20:01:02.205710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.550 [2024-11-26 20:01:02.205719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.551 [2024-11-26 20:01:02.205726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.551 [2024-11-26 20:01:02.205736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.551 [2024-11-26 20:01:02.205743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.551 [2024-11-26 20:01:02.205754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.551 [2024-11-26 20:01:02.205761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.551 [2024-11-26 20:01:02.205771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.551 [2024-11-26 20:01:02.205778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.551 [2024-11-26 20:01:02.205788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.551 [2024-11-26 20:01:02.205795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.551 [2024-11-26 20:01:02.206058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96bfc0 (9): Bad file descriptor 00:23:01.551 [2024-11-26 20:01:02.206089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd7d50 (9): Bad file descriptor 00:23:01.551 [2024-11-26 20:01:02.206102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:23:01.551 [2024-11-26 20:01:02.206118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x885610 (9): Bad file descriptor 00:23:01.551 [2024-11-26 20:01:02.206130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96dcc0 (9): Bad file descriptor 00:23:01.551 [2024-11-26 20:01:02.206144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96d850 (9): Bad file descriptor 00:23:01.551 [2024-11-26 20:01:02.206166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8ec90 (9): Bad file descriptor 00:23:01.551 [2024-11-26 20:01:02.206184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd99180 (9): Bad file descriptor 00:23:01.551 [2024-11-26 
20:01:02.206202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde5bb0 (9): Bad file descriptor 00:23:01.551 [2024-11-26 20:01:02.206217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd99980 (9): Bad file descriptor 00:23:01.551 [2024-11-26 20:01:02.210047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:23:01.551 [2024-11-26 20:01:02.210090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:01.551 [2024-11-26 20:01:02.210752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:01.551 [2024-11-26 20:01:02.210782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:23:01.551 [2024-11-26 20:01:02.211035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.551 [2024-11-26 20:01:02.211051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x885610 with addr=10.0.0.2, port=4420 00:23:01.551 [2024-11-26 20:01:02.211061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x885610 is same with the state(6) to be set 00:23:01.551 [2024-11-26 20:01:02.211525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.551 [2024-11-26 20:01:02.211566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x96dcc0 with addr=10.0.0.2, port=4420 00:23:01.551 [2024-11-26 20:01:02.211578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96dcc0 is same with the state(6) to be set 00:23:01.551 [2024-11-26 20:01:02.212183] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:01.551 [2024-11-26 20:01:02.212232] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:01.551 [2024-11-26 20:01:02.212270] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:01.551 [2024-11-26 20:01:02.212317] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:01.551 [2024-11-26 20:01:02.212636] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:01.551 [2024-11-26 20:01:02.212680] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:01.551 [2024-11-26 20:01:02.213029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.551 [2024-11-26 20:01:02.213044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x96d850 with addr=10.0.0.2, port=4420 00:23:01.551 [2024-11-26 20:01:02.213053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d850 is same with the state(6) to be set 00:23:01.551 [2024-11-26 20:01:02.213515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.551 [2024-11-26 20:01:02.213554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420 00:23:01.551 [2024-11-26 20:01:02.213565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set 00:23:01.551 [2024-11-26 20:01:02.213581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x885610 (9): Bad file descriptor 00:23:01.551 [2024-11-26 
20:01:02.213593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96dcc0 (9): Bad file descriptor 00:23:01.551 [2024-11-26 20:01:02.213710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96d850 (9): Bad file descriptor 00:23:01.551 [2024-11-26 20:01:02.213723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor 00:23:01.551 [2024-11-26 20:01:02.213732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:01.551 [2024-11-26 20:01:02.213739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:01.551 [2024-11-26 20:01:02.213748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:23:01.551 [2024-11-26 20:01:02.213757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:23:01.551 [2024-11-26 20:01:02.213766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:01.551 [2024-11-26 20:01:02.213772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:01.551 [2024-11-26 20:01:02.213779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:01.551 [2024-11-26 20:01:02.213786] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:01.551 [2024-11-26 20:01:02.213838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:01.551 [2024-11-26 20:01:02.213846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:01.551 [2024-11-26 20:01:02.213853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:23:01.551 [2024-11-26 20:01:02.213859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:23:01.551 [2024-11-26 20:01:02.213867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:23:01.551 [2024-11-26 20:01:02.213873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:23:01.551 [2024-11-26 20:01:02.213880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:23:01.551 [2024-11-26 20:01:02.213887] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
[... 00:23:01.551-00:23:01.554, 2024-11-26 20:01:02.216174 - 20:01:02.217283: 64 repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs, READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:23:01.554 [2024-11-26 20:01:02.217292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd636d0 is same with the state(6) to be set
00:23:01.554 [2024-11-26 20:01:02.218587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:01.554 [2024-11-26 20:01:02.218603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:01.554 [2024-11-26 20:01:02.218617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:01.554 [2024-11-26 20:01:02.218628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:01.554 [2024-11-26 20:01:02.218640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:01.554 [2024-11-26 20:01:02.218652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:01.554 [2024-11-26 20:01:02.218664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:01.554 [2024-11-26 20:01:02.218674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:01.554 [2024-11-26 20:01:02.218683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:01.554 [2024-11-26 20:01:02.218691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:01.554 [2024-11-26 20:01:02.218702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:01.554 [2024-11-26 20:01:02.218709] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.554 [2024-11-26 20:01:02.218721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.554 [2024-11-26 20:01:02.218729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.554 [2024-11-26 20:01:02.218739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.554 [2024-11-26 20:01:02.218747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.554 [2024-11-26 20:01:02.218757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.554 [2024-11-26 20:01:02.218764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.554 [2024-11-26 20:01:02.218774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.554 [2024-11-26 20:01:02.218782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.554 [2024-11-26 20:01:02.218792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.554 [2024-11-26 20:01:02.218799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.554 [2024-11-26 20:01:02.218812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.554 [2024-11-26 20:01:02.218820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.554 [2024-11-26 20:01:02.218830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.554 [2024-11-26 20:01:02.218837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.554 [2024-11-26 20:01:02.218847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.554 [2024-11-26 20:01:02.218855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.554 [2024-11-26 20:01:02.218864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.554 [2024-11-26 20:01:02.218873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.554 [2024-11-26 20:01:02.218883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.554 [2024-11-26 20:01:02.218890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.554 [2024-11-26 20:01:02.218900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.554 [2024-11-26 20:01:02.218907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.554 [2024-11-26 20:01:02.218917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.554 [2024-11-26 20:01:02.218924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.554 [2024-11-26 20:01:02.218938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.554 [2024-11-26 20:01:02.218947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.554 [2024-11-26 20:01:02.218958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.554 [2024-11-26 20:01:02.218965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.554 [2024-11-26 20:01:02.218974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.554 [2024-11-26 20:01:02.218982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.554 [2024-11-26 20:01:02.218992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.554 [2024-11-26 20:01:02.219000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.554 [2024-11-26 20:01:02.219009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.554 [2024-11-26 20:01:02.219016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.554 [2024-11-26 20:01:02.219026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.554 [2024-11-26 20:01:02.219035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.554 [2024-11-26 20:01:02.219045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.554 [2024-11-26 20:01:02.219052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.554 [2024-11-26 20:01:02.219061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.554 [2024-11-26 20:01:02.219069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.554 [2024-11-26 20:01:02.219079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.554 [2024-11-26 20:01:02.219086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.554 [2024-11-26 20:01:02.219096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.554 [2024-11-26 20:01:02.219103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.554 [2024-11-26 20:01:02.219114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:01.555 [2024-11-26 20:01:02.219426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 
20:01:02.219599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.555 [2024-11-26 20:01:02.219675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.555 [2024-11-26 20:01:02.219685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.219693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.219702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.219710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.219719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.219727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.219735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd706e0 is same with the state(6) to be set 00:23:01.556 [2024-11-26 20:01:02.221011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.221024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.221037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.221047] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.221058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.221066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.221078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.221087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.221098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.221107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.221117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.221124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.221134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.221141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.221151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.221163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.221173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.221180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.221192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.221199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.221209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.221216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.221225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.221232] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.221242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.221249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.221260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.221267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.221276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.221284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.221293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.221301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.221311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.221318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.221328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.221335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.221345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.221352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.221362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.221370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.221379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.221386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.221396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.221405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.221415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.221422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.221432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.221440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.221449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.221456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.221466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.221473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.221482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.221489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.221499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.556 [2024-11-26 20:01:02.221506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.556 [2024-11-26 20:01:02.221516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.557 [2024-11-26 20:01:02.221523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.557 [2024-11-26 20:01:02.221533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.557 [2024-11-26 20:01:02.221540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.557 [2024-11-26 20:01:02.221550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.557 [2024-11-26 20:01:02.221557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.557 [2024-11-26 20:01:02.221567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.557 [2024-11-26 20:01:02.221574] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.557 [2024-11-26 20:01:02.221583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.557 [2024-11-26 20:01:02.221591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.557 [2024-11-26 20:01:02.221600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.557 [2024-11-26 20:01:02.221607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.557 [2024-11-26 20:01:02.221619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.557 [2024-11-26 20:01:02.221626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.557 [2024-11-26 20:01:02.221636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.557 [2024-11-26 20:01:02.221643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.557 [2024-11-26 20:01:02.221652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.557 [2024-11-26 20:01:02.221659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.557 [2024-11-26 20:01:02.221669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.557 [2024-11-26 20:01:02.221676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.557 [2024-11-26 20:01:02.221686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.557 [2024-11-26 20:01:02.221693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.557 [2024-11-26 20:01:02.221702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.557 [2024-11-26 20:01:02.221709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.557 [2024-11-26 20:01:02.221719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.557 [2024-11-26 20:01:02.221727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.557 [2024-11-26 20:01:02.221737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.557 [2024-11-26 20:01:02.221744] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.557 [2024-11-26 20:01:02.221753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.557 [2024-11-26 20:01:02.221761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.557 [2024-11-26 20:01:02.221770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.557 [2024-11-26 20:01:02.221778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.557 [2024-11-26 20:01:02.221787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.557 [2024-11-26 20:01:02.221794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.557 [2024-11-26 20:01:02.221804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.557 [2024-11-26 20:01:02.221811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.557 [2024-11-26 20:01:02.221821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.557 [2024-11-26 20:01:02.221829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.557 [2024-11-26 20:01:02.221840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.557 [2024-11-26 20:01:02.221847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.557 [2024-11-26 20:01:02.221857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.557 [2024-11-26 20:01:02.221865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.557 [2024-11-26 20:01:02.221874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.557 [2024-11-26 20:01:02.221882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.557 [2024-11-26 20:01:02.221891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.557 [2024-11-26 20:01:02.221898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.557 [2024-11-26 20:01:02.221908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.557 [2024-11-26 20:01:02.221915] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.557 [2024-11-26 20:01:02.221925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.557 [2024-11-26 20:01:02.221932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.557 [2024-11-26 20:01:02.221941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.557 [2024-11-26 20:01:02.221949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.557 [2024-11-26 20:01:02.221958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.557 [2024-11-26 20:01:02.221966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.557 [2024-11-26 20:01:02.221975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.557 [2024-11-26 20:01:02.221983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.557 [2024-11-26 20:01:02.221992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.558 [2024-11-26 20:01:02.221999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.558 [2024-11-26 20:01:02.222009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.558 [2024-11-26 20:01:02.222016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.558 [2024-11-26 20:01:02.222025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.558 [2024-11-26 20:01:02.222033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.558 [2024-11-26 20:01:02.222041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd719a0 is same with the state(6) to be set 00:23:01.558 [2024-11-26 20:01:02.223294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.558 [2024-11-26 20:01:02.223307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.558 [2024-11-26 20:01:02.223320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.558 [2024-11-26 20:01:02.223329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.558 [2024-11-26 20:01:02.223341] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
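Each burst above shows the same event: once a TCP qpair starts disconnecting, every READ still outstanding on sqid:1 is completed with ABORTED - SQ DELETION. In the "(00/08)" of each completion, 0x0 is the NVMe status code type (generic) and 0x08 the status code (command aborted due to SQ deletion). A minimal sketch of how a host-side completion callback could recognize this status, assuming only the public SPDK NVMe headers; the function name read_done is illustrative, not taken from the test code:

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Sketch only: recognize the status the notices above print.
     * "(00/08)" is status code type 0x0 (generic) / status code 0x08,
     * i.e. the command was aborted because its submission queue was
     * deleted while the request was still outstanding. */
    static void
    read_done(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            (void)ctx;
            if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    /* Expected during qpair teardown: the I/O was not
                     * executed and may be resubmitted on a new qpair. */
                    printf("cid %u aborted by SQ deletion\n", cpl->cid);
                    return;
            }
            if (spdk_nvme_cpl_is_error(cpl)) {
                    printf("cid %u failed: sct 0x%x sc 0x%x\n",
                           cpl->cid, cpl->status.sct, cpl->status.sc);
            }
    }

A callback of this shape would be passed as the cb_fn argument of, e.g., spdk_nvme_ns_cmd_read(); during qpair teardown it treats the SQ-deletion abort as an expected outcome rather than an I/O failure.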
[... fourth burst elided: beginning 20:01:02.223294, READ sqid:1 cid:0-51 nsid:1 (lba 16384-22912), same ABORTED - SQ DELETION (00/08) completions through 20:01:02.224201; burst continues below ...]
00:23:01.559 [2024-11-26 20:01:02.224211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.559 [2024-11-26 
20:01:02.224218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.559 [2024-11-26 20:01:02.224228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.559 [2024-11-26 20:01:02.224235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.559 [2024-11-26 20:01:02.224245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.559 [2024-11-26 20:01:02.224252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.559 [2024-11-26 20:01:02.224262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.559 [2024-11-26 20:01:02.224269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.559 [2024-11-26 20:01:02.224278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.559 [2024-11-26 20:01:02.224286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.559 [2024-11-26 20:01:02.224295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.559 [2024-11-26 20:01:02.224303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.559 [2024-11-26 20:01:02.224312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.559 [2024-11-26 20:01:02.224320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.559 [2024-11-26 20:01:02.224329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.559 [2024-11-26 20:01:02.224336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.559 [2024-11-26 20:01:02.224346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.224353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.224363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.224370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.224380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.224389] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.224398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.224406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.224414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd72c60 is same with the state(6) to be set 00:23:01.560 [2024-11-26 20:01:02.225698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.225711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.225725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.225734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.225745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.225755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.225766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.225775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.225786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.225796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.225807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.225814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.225824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.225831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.225840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.225848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.225857] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.225865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.225874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.225881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.225891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.225900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.225910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.225917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.225926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.225934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.225943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.225951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.225960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.225967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.225976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.225984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.225994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.226001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.226011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.226018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.226027] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.226035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.226045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.226052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.226062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.226070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.226079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.226087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.226096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.226103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.226114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.226122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.226131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.226139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.226148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.226155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.226169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.226176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.226186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.560 [2024-11-26 20:01:02.226193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.560 [2024-11-26 20:01:02.226202] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226373] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226541] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.561 [2024-11-26 20:01:02.226739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.561 [2024-11-26 20:01:02.226749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.226758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.226768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.226775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.226784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.226792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.226801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.226809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.226817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc17d90 is same with the state(6) to be set 00:23:01.562 [2024-11-26 20:01:02.228084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.228098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.228111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.228120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.228132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.228141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.228152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.228165] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.228176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.228185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.228197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.228206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.228216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.228223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.228233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.228240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.228250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.228259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.228269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.228276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.228285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.228292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.228302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.228309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.228318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.228326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.228335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.228342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.228352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.228359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.228368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.228375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.228385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.228392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.228402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.228409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.228419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.228427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.228436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.228444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.228453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.228460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.228470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.228479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.228488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.228496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.228506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.228513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.228523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.228530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.228539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.228547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.562 [2024-11-26 20:01:02.228556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.562 [2024-11-26 20:01:02.228563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.563 [2024-11-26 20:01:02.228573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.563 [2024-11-26 20:01:02.228580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.563 [2024-11-26 20:01:02.228589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.563 [2024-11-26 20:01:02.228597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.563 [2024-11-26 20:01:02.228606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.563 [2024-11-26 20:01:02.228613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.563 [2024-11-26 20:01:02.228623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.563 [2024-11-26 20:01:02.228630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.563 [2024-11-26 20:01:02.228639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.563 [2024-11-26 20:01:02.228647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.563 [2024-11-26 20:01:02.228656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.563 [2024-11-26 20:01:02.228663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.563 [2024-11-26 20:01:02.228673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.563 [2024-11-26 20:01:02.228680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:01.563 [2024-11-26 20:01:02.228691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.563 [2024-11-26 20:01:02.228699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.563 [2024-11-26 20:01:02.228708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.563 [2024-11-26 20:01:02.228715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.563 [2024-11-26 20:01:02.228725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.563 [2024-11-26 20:01:02.228732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.563 [2024-11-26 20:01:02.228741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.563 [2024-11-26 20:01:02.228749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.563 [2024-11-26 20:01:02.228758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.563 [2024-11-26 20:01:02.228766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.563 [2024-11-26 20:01:02.228775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.563 [2024-11-26 20:01:02.228782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.563 [2024-11-26 20:01:02.228792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.563 [2024-11-26 20:01:02.228799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.563 [2024-11-26 20:01:02.228809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.563 [2024-11-26 20:01:02.228816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.563 [2024-11-26 20:01:02.228826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.563 [2024-11-26 20:01:02.228833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.563 [2024-11-26 20:01:02.228842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.563 [2024-11-26 20:01:02.228850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:01.563 [2024-11-26 20:01:02.228859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.563 [2024-11-26 20:01:02.228866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.563 [2024-11-26 20:01:02.228876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.563 [2024-11-26 20:01:02.228883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.563 [2024-11-26 20:01:02.228893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.563 [2024-11-26 20:01:02.228902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.563 [2024-11-26 20:01:02.228911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.563 [2024-11-26 20:01:02.228918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.563 [2024-11-26 20:01:02.228927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.563 [2024-11-26 20:01:02.228935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.563 [2024-11-26 20:01:02.228945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.563 [2024-11-26 20:01:02.228952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.563 [2024-11-26 20:01:02.228961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.563 [2024-11-26 20:01:02.228969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.563 [2024-11-26 20:01:02.228978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.563 [2024-11-26 20:01:02.228986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.564 [2024-11-26 20:01:02.228996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.564 [2024-11-26 20:01:02.229003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.564 [2024-11-26 20:01:02.229012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.564 [2024-11-26 20:01:02.229019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.564 [2024-11-26 
20:01:02.229029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.564 [2024-11-26 20:01:02.229037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.564 [2024-11-26 20:01:02.229046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.564 [2024-11-26 20:01:02.229054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.564 [2024-11-26 20:01:02.229063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.564 [2024-11-26 20:01:02.229070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.564 [2024-11-26 20:01:02.229080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.564 [2024-11-26 20:01:02.229087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.564 [2024-11-26 20:01:02.229097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.564 [2024-11-26 20:01:02.229104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.564 [2024-11-26 20:01:02.229115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.564 [2024-11-26 20:01:02.229122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.564 [2024-11-26 20:01:02.229132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.564 [2024-11-26 20:01:02.229139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.564 [2024-11-26 20:01:02.229148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.564 [2024-11-26 20:01:02.229155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.564 [2024-11-26 20:01:02.229170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.564 [2024-11-26 20:01:02.229178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.564 [2024-11-26 20:01:02.229187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.564 [2024-11-26 20:01:02.229194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.564 [2024-11-26 20:01:02.229203] 
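The "(00/08)" in every completion above is the NVMe status pair (SCT/SC): status code type 0x0 (generic command status) with status code 0x08, which the NVMe base spec names "Command Aborted due to SQ Deletion" and SPDK renders as ABORTED - SQ DELETION. In other words, every I/O still outstanding on qid:1 is failed back when the submission queue is torn down for the controller reset. A minimal stand-alone decoder for that pair, as an illustrative sketch (the function and names below are this note's own, not SPDK's API):

    /* decode_status.c - decode the "(SCT/SC)" pair printed in the log above.
     * Only the codes seen in this log are mapped; names are illustrative. */
    #include <stdio.h>

    static const char *nvme_status_str(unsigned sct, unsigned sc)
    {
        if (sct == 0x0 && sc == 0x08)
            return "ABORTED - SQ DELETION"; /* generic status, aborted due to SQ deletion */
        if (sct == 0x0 && sc == 0x00)
            return "SUCCESSFUL COMPLETION";
        return "OTHER";
    }

    int main(void)
    {
        /* "(00/08)" exactly as it appears in the completions above */
        printf("(00/08) -> %s\n", nvme_status_str(0x0, 0x08));
        return 0;
    }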
00:23:01.564 [2024-11-26 20:01:02.229203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc18fd0 is same with the state(6) to be set
00:23:01.564 [2024-11-26 20:01:02.231107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:23:01.564 [2024-11-26 20:01:02.231133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:23:01.564 [2024-11-26 20:01:02.231145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:23:01.564 [2024-11-26 20:01:02.231180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:23:01.564 [2024-11-26 20:01:02.231267] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:23:01.564 [2024-11-26 20:01:02.231285] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:23:01.564 [2024-11-26 20:01:02.248245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:23:01.564 task offset: 26880 on job bdev=Nvme7n1 fails
00:23:01.564
00:23:01.564 Latency(us)
00:23:01.564 [2024-11-26T19:01:02.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:01.564 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.564 Job: Nvme1n1 ended in about 0.98 seconds with error
00:23:01.564 Verification LBA range: start 0x0 length 0x400
00:23:01.564 Nvme1n1 : 0.98 196.42 12.28 65.47 0.00 241681.07 20643.84 248162.99
00:23:01.564 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.564 Job: Nvme2n1 ended in about 0.98 seconds with error
00:23:01.564 Verification LBA range: start 0x0 length 0x400
00:23:01.564 Nvme2n1 : 0.98 196.19 12.26 65.40 0.00 237119.15 35607.89 232434.35
00:23:01.564 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.564 Job: Nvme3n1 ended in about 0.99 seconds with error
00:23:01.564 Verification LBA range: start 0x0 length 0x400
00:23:01.564 Nvme3n1 : 0.99 194.23 12.14 64.74 0.00 234687.57 13434.88 256901.12
00:23:01.564 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.564 Job: Nvme4n1 ended in about 0.99 seconds with error
00:23:01.564 Verification LBA range: start 0x0 length 0x400
00:23:01.564 Nvme4n1 : 0.99 193.76 12.11 64.59 0.00 230425.81 16493.23 253405.87
00:23:01.564 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.564 Job: Nvme5n1 ended in about 0.99 seconds with error
00:23:01.564 Verification LBA range: start 0x0 length 0x400
00:23:01.564 Nvme5n1 : 0.99 133.91 8.37 59.40 0.00 301383.11 18350.08 267386.88
00:23:01.564 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.564 Job: Nvme6n1 ended in about 1.00 seconds with error
00:23:01.564 Verification LBA range: start 0x0 length 0x400
00:23:01.564 Nvme6n1 : 1.00 128.56 8.04 64.28 0.00 295918.08 21736.11 260396.37
00:23:01.564 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.564 Job: Nvme7n1 ended in about 0.97 seconds with error
00:23:01.564 Verification LBA range: start 0x0 length 0x400
00:23:01.564 Nvme7n1 : 0.97 198.63 12.41 66.21 0.00 209794.35 20534.61 255153.49
00:23:01.564 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.564 Job: Nvme8n1 ended in about 0.98 seconds with error
00:23:01.564 Verification LBA range: start 0x0 length 0x400
00:23:01.564 Nvme8n1 : 0.98 195.92 12.25 65.31 0.00 208261.12 17257.81 253405.87
00:23:01.564 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.564 Job: Nvme9n1 ended in about 1.00 seconds with error
00:23:01.564 Verification LBA range: start 0x0 length 0x400
00:23:01.564 Nvme9n1 : 1.00 132.26 8.27 64.13 0.00 271818.92 19988.48 270882.13
00:23:01.564 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.564 Job: Nvme10n1 ended in about 1.00 seconds with error
00:23:01.564 Verification LBA range: start 0x0 length 0x400
00:23:01.564 Nvme10n1 : 1.00 191.93 12.00 63.98 0.00 203900.16 15291.73 255153.49
00:23:01.564 [2024-11-26T19:01:02.385Z] ===================================================================================================================
00:23:01.564 [2024-11-26T19:01:02.385Z] Total : 1761.82 110.11 643.51 0.00 239806.43 13434.88 270882.13
00:23:01.564 [2024-11-26 20:01:02.272083] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:01.564 [2024-11-26 20:01:02.272116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:23:01.564 [2024-11-26 20:01:02.272566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:01.564 [2024-11-26 20:01:02.272585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x96bfc0 with addr=10.0.0.2, port=4420
00:23:01.564 [2024-11-26 20:01:02.272595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96bfc0 is same with the state(6) to be set
00:23:01.565 [2024-11-26 20:01:02.272925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:01.565 [2024-11-26 20:01:02.272936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd99980 with addr=10.0.0.2, port=4420
00:23:01.565 [2024-11-26 20:01:02.272944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd99980 is same with the state(6) to be set
00:23:01.565 [2024-11-26 20:01:02.273285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:01.565 [2024-11-26 20:01:02.273296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd99180 with addr=10.0.0.2, port=4420
00:23:01.565 [2024-11-26 20:01:02.273303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd99180 is same with the state(6) to be set
00:23:01.565 [2024-11-26 20:01:02.273594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:01.565 [2024-11-26 20:01:02.273604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8ec90 with addr=10.0.0.2, port=4420
00:23:01.565 [2024-11-26 20:01:02.273611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ec90 is same with the state(6) to be set
00:23:01.565 [2024-11-26 20:01:02.273636] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:23:01.565 [2024-11-26 20:01:02.273653] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
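
The bdevperf table above is internally consistent: every job ran with a 65536-byte (64 KiB) I/O size, so the MiB/s column should equal IOPS/16, and Fail/s counts the I/O that completed with an abort status rather than success. A quick arithmetic check against the Nvme1n1 row (an illustrative sketch, not part of the test output):

awk 'BEGIN { iops = 196.42; io_size = 65536; printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'
# prints 12.28 MiB/s, matching the 12.28 reported for Nvme1n1
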
00:23:01.565 [2024-11-26 20:01:02.273663] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:23:01.565 [2024-11-26 20:01:02.273676] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:23:01.565 [2024-11-26 20:01:02.273693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8ec90 (9): Bad file descriptor
00:23:01.565 [2024-11-26 20:01:02.273707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd99180 (9): Bad file descriptor
00:23:01.565 [2024-11-26 20:01:02.273721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd99980 (9): Bad file descriptor
00:23:01.565 [2024-11-26 20:01:02.273733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96bfc0 (9): Bad file descriptor
00:23:01.565 1761.82 IOPS, 110.11 MiB/s [2024-11-26T19:01:02.385Z]
[2024-11-26 20:01:02.275376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:01.565 [2024-11-26 20:01:02.275388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:23:01.565 [2024-11-26 20:01:02.275398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:23:01.565 [2024-11-26 20:01:02.275406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:23:01.565 [2024-11-26 20:01:02.275727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:01.565 [2024-11-26 20:01:02.275741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde5bb0 with addr=10.0.0.2, port=4420
00:23:01.565 [2024-11-26 20:01:02.275749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde5bb0 is same with the state(6) to be set
00:23:01.565 [2024-11-26 20:01:02.276019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:01.565 [2024-11-26 20:01:02.276030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd7d50 with addr=10.0.0.2, port=4420
00:23:01.565 [2024-11-26 20:01:02.276037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd7d50 is same with the state(6) to be set
00:23:01.565 [2024-11-26 20:01:02.276066] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:23:01.565 [2024-11-26 20:01:02.276078] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:23:01.565 [2024-11-26 20:01:02.276089] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:23:01.565 [2024-11-26 20:01:02.276101] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
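
The repeating three-record pattern here (posix.c connect() errno = 111, the nvme_tcp.c:2288 sock connection error, then the recv-state message) is one failed reconnect attempt per qpair: errno 111 is ECONNREFUSED on Linux, i.e. nothing is listening on 10.0.0.2:4420 once the target application has stopped, and the "(9): Bad file descriptor" flush failures are EBADF on qpair sockets that were already torn down. A bash-only probe of the listener state, useful when reproducing this by hand (an illustrative sketch, not from the harness; it relies on bash's /dev/tcp redirection):

if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "port 4420 is accepting connections"
else
    echo "port 4420 refused or unreachable (consistent with errno = 111 above)"
fi
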
00:23:01.565 [2024-11-26 20:01:02.276440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:01.565 [2024-11-26 20:01:02.276454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x96dcc0 with addr=10.0.0.2, port=4420
00:23:01.565 [2024-11-26 20:01:02.276461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96dcc0 is same with the state(6) to be set
00:23:01.565 [2024-11-26 20:01:02.276647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:01.565 [2024-11-26 20:01:02.276660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x885610 with addr=10.0.0.2, port=4420
00:23:01.565 [2024-11-26 20:01:02.276668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x885610 is same with the state(6) to be set
00:23:01.565 [2024-11-26 20:01:02.276983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:01.565 [2024-11-26 20:01:02.276993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8e6a0 with addr=10.0.0.2, port=4420
00:23:01.565 [2024-11-26 20:01:02.277000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8e6a0 is same with the state(6) to be set
00:23:01.565 [2024-11-26 20:01:02.277171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:01.565 [2024-11-26 20:01:02.277187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x96d850 with addr=10.0.0.2, port=4420
00:23:01.565 [2024-11-26 20:01:02.277195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d850 is same with the state(6) to be set
00:23:01.565 [2024-11-26 20:01:02.277205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde5bb0 (9): Bad file descriptor
00:23:01.565 [2024-11-26 20:01:02.277215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd7d50 (9): Bad file descriptor
00:23:01.565 [2024-11-26 20:01:02.277223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:23:01.565 [2024-11-26 20:01:02.277230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:23:01.565 [2024-11-26 20:01:02.277238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:23:01.565 [2024-11-26 20:01:02.277247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:23:01.565 [2024-11-26 20:01:02.277256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:23:01.565 [2024-11-26 20:01:02.277262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:23:01.565 [2024-11-26 20:01:02.277269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:23:01.565 [2024-11-26 20:01:02.277276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:23:01.565 [2024-11-26 20:01:02.277283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:23:01.565 [2024-11-26 20:01:02.277289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:23:01.565 [2024-11-26 20:01:02.277296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:23:01.565 [2024-11-26 20:01:02.277304] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:23:01.565 [2024-11-26 20:01:02.277312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:23:01.565 [2024-11-26 20:01:02.277318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:23:01.565 [2024-11-26 20:01:02.277325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:23:01.565 [2024-11-26 20:01:02.277331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:23:01.565 [2024-11-26 20:01:02.277413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96dcc0 (9): Bad file descriptor
00:23:01.565 [2024-11-26 20:01:02.277424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x885610 (9): Bad file descriptor
00:23:01.565 [2024-11-26 20:01:02.277434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e6a0 (9): Bad file descriptor
00:23:01.565 [2024-11-26 20:01:02.277443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96d850 (9): Bad file descriptor
00:23:01.565 [2024-11-26 20:01:02.277451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:23:01.565 [2024-11-26 20:01:02.277461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:23:01.565 [2024-11-26 20:01:02.277469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:23:01.566 [2024-11-26 20:01:02.277475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:23:01.566 [2024-11-26 20:01:02.277482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:23:01.566 [2024-11-26 20:01:02.277489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:23:01.566 [2024-11-26 20:01:02.277496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:23:01.566 [2024-11-26 20:01:02.277503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:23:01.566 [2024-11-26 20:01:02.277533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:23:01.566 [2024-11-26 20:01:02.277540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:23:01.566 [2024-11-26 20:01:02.277547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:01.566 [2024-11-26 20:01:02.277554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:23:01.566 [2024-11-26 20:01:02.277561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:23:01.566 [2024-11-26 20:01:02.277567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:23:01.566 [2024-11-26 20:01:02.277574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:23:01.566 [2024-11-26 20:01:02.277582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:23:01.566 [2024-11-26 20:01:02.277590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:23:01.566 [2024-11-26 20:01:02.277596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:23:01.566 [2024-11-26 20:01:02.277603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:23:01.566 [2024-11-26 20:01:02.277609] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:23:01.566 [2024-11-26 20:01:02.277617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:23:01.566 [2024-11-26 20:01:02.277623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:23:01.566 [2024-11-26 20:01:02.277630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:23:01.566 [2024-11-26 20:01:02.277637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
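
Each subsystem walks the same four-record failure path: nvme_ctrlr_process_init finds the controller in error state, the reconnect poller gives up, nvme_ctrlr_fail marks it failed, and bdev_nvme reports the reset as failed. For triaging a log like this it helps to tally that path per NQN and confirm all ten subsystems (cnode1 through cnode10) ended the same way; a sketch ("console.log" is a placeholder for a saved copy of this console output):

grep -o 'cnode[0-9]*, 1] Resetting controller failed\.' console.log | sort -V | uniq -c
# expect one "Resetting controller failed." record for each of cnode1..cnode10
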
00:23:01.827 20:01:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:23:02.770 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3725863 00:23:02.770 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:23:02.770 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3725863 00:23:02.770 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:23:02.770 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:02.770 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:23:02.770 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:02.770 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3725863 00:23:02.770 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:23:02.770 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:02.770 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:23:02.770 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:23:02.770 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:23:02.770 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:02.771 rmmod nvme_tcp 00:23:02.771 
rmmod nvme_fabrics 00:23:02.771 rmmod nvme_keyring 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3725588 ']' 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3725588 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3725588 ']' 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3725588 00:23:02.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3725588) - No such process 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3725588 is not found' 00:23:02.771 Process with pid 3725588 is not found 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.771 20:01:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.770 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:04.770 00:23:04.770 real 0m7.794s 00:23:04.770 user 0m19.110s 00:23:04.770 sys 0m1.272s 00:23:04.770 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:04.770 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:04.770 ************************************ 00:23:04.770 END TEST nvmf_shutdown_tc3 00:23:04.770 ************************************ 00:23:05.031 20:01:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:05.031 ************************************ 00:23:05.031 START TEST nvmf_shutdown_tc4 00:23:05.031 ************************************ 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:05.031 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:05.032 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:05.032 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.032 20:01:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:05.032 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:05.032 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:05.032 20:01:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:05.032 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:05.293 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:05.293 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:05.294 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:05.294 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:05.294 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:05.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:05.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:23:05.294 00:23:05.294 --- 10.0.0.2 ping statistics --- 00:23:05.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.294 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:23:05.294 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:05.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:05.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:23:05.294 00:23:05.294 --- 10.0.0.1 ping statistics --- 00:23:05.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.294 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:23:05.294 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:05.294 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:23:05.294 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:05.294 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:05.294 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:05.294 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:05.294 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:05.294 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:05.294 20:01:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:05.294 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:05.294 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:05.294 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:05.294 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:05.294 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3727120 00:23:05.294 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3727120 00:23:05.294 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:05.294 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3727120 ']' 00:23:05.294 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.294 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:05.294 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
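
Before the target is started, nvmf/common.sh builds the two-port topology just exercised by those pings: one port of the e810 pair (cvl_0_0) is moved into a private network namespace to act as the target, while the peer port (cvl_0_1) stays in the root namespace as the initiator. Condensed from the commands visible above (same interface, namespace, and address names; a sketch, not a replacement for the harness):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
ping -c 1 10.0.0.2                                     # initiator -> target (0.650 ms above)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator (0.277 ms above)
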
00:23:05.294 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:05.294 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:05.294 [2024-11-26 20:01:06.089788] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:23:05.294 [2024-11-26 20:01:06.089853] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.555 [2024-11-26 20:01:06.187547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:05.555 [2024-11-26 20:01:06.221942] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.555 [2024-11-26 20:01:06.221972] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.555 [2024-11-26 20:01:06.221982] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.555 [2024-11-26 20:01:06.221986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.555 [2024-11-26 20:01:06.221991] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:05.555 [2024-11-26 20:01:06.223569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:05.555 [2024-11-26 20:01:06.223705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:05.555 [2024-11-26 20:01:06.223857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.555 [2024-11-26 20:01:06.223858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:06.127 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.127 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:23:06.127 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:06.127 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:06.127 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:06.127 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.127 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:06.127 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.127 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:06.127 [2024-11-26 20:01:06.935985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.127 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.127 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:06.127 20:01:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:06.127 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:06.127 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:06.389 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:06.389 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.389 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:06.389 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.389 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:06.389 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.389 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:06.389 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.389 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:06.389 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.389 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:06.389 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.389 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:06.389 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.389 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:06.389 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.389 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:06.389 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.389 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:06.389 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.389 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:06.389 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:06.389 20:01:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.389 20:01:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:06.389 Malloc1 
00:23:06.389 [2024-11-26 20:01:07.054603] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.389 Malloc2 00:23:06.389 Malloc3 00:23:06.389 Malloc4 00:23:06.389 Malloc5 00:23:06.649 Malloc6 00:23:06.649 Malloc7 00:23:06.649 Malloc8 00:23:06.649 Malloc9 00:23:06.649 Malloc10 00:23:06.649 20:01:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.649 20:01:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:06.649 20:01:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:06.649 20:01:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:06.650 20:01:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3727504 00:23:06.650 20:01:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:06.650 20:01:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:06.910 [2024-11-26 20:01:07.536844] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:12.206 20:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:12.206 20:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3727120 00:23:12.206 20:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3727120 ']' 00:23:12.206 20:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3727120 00:23:12.206 20:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:23:12.207 20:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:12.207 20:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3727120 00:23:12.207 20:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:12.207 20:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:12.207 20:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3727120' 00:23:12.207 killing process with pid 3727120 00:23:12.207 20:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3727120 00:23:12.207 20:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3727120 00:23:12.207 [2024-11-26 20:01:12.534178] 
00:23:12.207 [2024-11-26 20:01:12.534178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2544200 is same with the state(6) to be set
00:23:12.207 [2024-11-26 20:01:12.534561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266ae20 is same with the state(6) to be set
00:23:12.207 [2024-11-26 20:01:12.534728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266b310 is same with the state(6) to be set
00:23:12.207 [2024-11-26 20:01:12.535170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2543d30 is same with the state(6) to be set
00:23:12.207 [2024-11-26 20:01:12.536857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266bcb0 is same with the state(6) to be set
00:23:12.207 [2024-11-26 20:01:12.537169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266c1a0 is same with the state(6) to be set
00:23:12.207 [2024-11-26 20:01:12.537546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266c670 is same with the state(6) to be set
00:23:12.207 [2024-11-26 20:01:12.537759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266b7e0 is same with the state(6) to be set
00:23:12.207 [2024-11-26 20:01:12.539588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266d4e0 is same with the state(6) to be set
00:23:12.207 [2024-11-26 20:01:12.539982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266d9b0 is same with the state(6) to be set
00:23:12.207 [2024-11-26 20:01:12.540286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266cb40 is same with the state(6) to be set
00:23:12.207 Write completed with error (sct=0, sc=8)
00:23:12.207 starting I/O failed: -6
00:23:12.207 [2024-11-26 20:01:12.540136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:12.208 Write completed with error (sct=0, sc=8)
00:23:12.208 starting I/O failed: -6
00:23:12.208 [2024-11-26 20:01:12.540966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:12.208 Write completed with error (sct=0, sc=8)
00:23:12.208 starting I/O failed: -6
00:23:12.208 [2024-11-26 20:01:12.541861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:12.209 Write completed with error (sct=0, sc=8)
00:23:12.209 starting I/O failed: -6
00:23:12.209 [2024-11-26 20:01:12.543277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:12.209 NVMe io qpair process completion error
00:23:12.209 Write completed with error (sct=0, sc=8)
00:23:12.209 starting I/O failed: -6
00:23:12.209 [2024-11-26 20:01:12.544512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:12.209 Write completed with error (sct=0, sc=8)
00:23:12.209 starting I/O failed: -6
00:23:12.209 [2024-11-26 20:01:12.545476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:12.210 Write completed with error (sct=0, sc=8)
00:23:12.210 starting I/O failed: -6
00:23:12.210 [2024-11-26 20:01:12.546399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:12.210 Write completed with error (sct=0, sc=8)
00:23:12.210 starting I/O failed: -6
00:23:12.210 [2024-11-26 20:01:12.548270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:12.210 NVMe io qpair process completion error
00:23:12.210 Write completed with error (sct=0, sc=8)
00:23:12.210 starting I/O failed: -6
00:23:12.210 [2024-11-26 20:01:12.549619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:12.211 Write completed with error (sct=0, sc=8)
00:23:12.211 starting I/O failed: -6
00:23:12.211 [2024-11-26 20:01:12.550613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:12.211 Write completed with error (sct=0, sc=8)
00:23:12.211 starting I/O failed: -6
00:23:12.211 [2024-11-26 20:01:12.551552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:12.212 Write completed with error (sct=0, sc=8)
00:23:12.212 starting I/O failed: -6
00:23:12.212 [2024-11-26 20:01:12.554004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:12.212 NVMe io qpair process completion error
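The identical burst repeats for each controller (cnode1, cnode6, cnode8, and below cnode9) until every queue pair on every subsystem is torn down, so a saved copy of this console output is easier to check with a tally than by eye (hypothetical commands; build.log stands in for wherever the log was saved):

    # Count aborted writes, then list which subsystems hit CQ transport errors.
    grep -c 'Write completed with error (sct=0, sc=8)' build.log
    grep -o 'nqn\.2016-06\.io\.spdk:cnode[0-9]*' build.log | sort | uniq -c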
00:23:12.212 Write completed with error (sct=0, sc=8)
00:23:12.212 starting I/O failed: -6
00:23:12.212 [2024-11-26 20:01:12.555395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:12.212 Write completed with error (sct=0, sc=8)
00:23:12.212 starting I/O failed: -6
00:23:12.212 [2024-11-26 20:01:12.556212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:12.213 Write completed with error (sct=0, sc=8)
00:23:12.213 starting I/O failed: -6
00:23:12.213 [2024-11-26 20:01:12.557128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:12.213 Write completed with error
(sct=0, sc=8) 00:23:12.213 starting I/O failed: -6 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 starting I/O failed: -6 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 starting I/O failed: -6 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 starting I/O failed: -6 00:23:12.213 [2024-11-26 20:01:12.558569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:12.213 NVMe io qpair process completion error 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 starting I/O failed: -6 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 starting I/O failed: -6 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 starting I/O failed: -6 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 starting I/O failed: -6 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 starting I/O failed: -6 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 starting I/O failed: -6 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 starting I/O failed: -6 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 Write completed with error (sct=0, sc=8) 00:23:12.213 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 [2024-11-26 20:01:12.559628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error 
(sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 [2024-11-26 20:01:12.560452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with 
error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 [2024-11-26 20:01:12.561393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:12.214 Write 
completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.214 Write completed with error (sct=0, sc=8) 00:23:12.214 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write 
completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 [2024-11-26 20:01:12.563439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:12.215 NVMe io qpair process completion error 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, 
sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 [2024-11-26 20:01:12.564512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, 
sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.215 [2024-11-26 20:01:12.565324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 Write completed with error (sct=0, sc=8) 00:23:12.215 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 
00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 [2024-11-26 20:01:12.566248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error 
(sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.216 Write completed with error (sct=0, sc=8) 00:23:12.216 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error 
(sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 [2024-11-26 20:01:12.568658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:12.217 NVMe io qpair process completion error 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 [2024-11-26 20:01:12.569758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error 
(sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 [2024-11-26 20:01:12.570577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 
00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 Write completed with error (sct=0, sc=8) 00:23:12.217 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 [2024-11-26 20:01:12.571524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error 
(sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error (sct=0, sc=8) 00:23:12.218 starting I/O failed: -6 00:23:12.218 Write completed with error 
(sct=0, sc=8)
00:23:12.218 starting I/O failed: -6
00:23:12.218 Write completed with error (sct=0, sc=8)
00:23:12.218 [entries of the form "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" repeat for every queued write on the failing qpairs of cnode7, cnode10, cnode3 and cnode5 through 00:23:12.223; the repetitions are condensed here to the distinct per-qpair transport errors]
00:23:12.218 [2024-11-26 20:01:12.573137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:12.218 NVMe io qpair process completion error
00:23:12.219 [2024-11-26 20:01:12.574783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:12.219 [2024-11-26 20:01:12.575698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:12.220 [2024-11-26 20:01:12.578174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:12.220 NVMe io qpair process completion error
00:23:12.220 [2024-11-26 20:01:12.579427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:12.220 [2024-11-26 20:01:12.580482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:12.221 [2024-11-26 20:01:12.581415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:12.221 [2024-11-26 20:01:12.582862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:12.221 NVMe io qpair process completion error
00:23:12.222 [2024-11-26 20:01:12.584731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:12.222 [2024-11-26 20:01:12.585648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:12.223 [2024-11-26 20:01:12.588782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:12.223 NVMe io qpair process completion error
00:23:12.223 Initializing NVMe Controllers
00:23:12.223 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:23:12.223 Controller IO queue size 128, less than required.
00:23:12.223 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:12.223 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:23:12.223 Controller IO queue size 128, less than required.
00:23:12.223 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:12.223 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:23:12.223 Controller IO queue size 128, less than required.
00:23:12.223 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:12.223 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:23:12.223 Controller IO queue size 128, less than required.
00:23:12.223 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:12.223 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:23:12.223 Controller IO queue size 128, less than required.
00:23:12.223 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:12.223 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:23:12.223 Controller IO queue size 128, less than required.
00:23:12.223 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:12.223 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:23:12.223 Controller IO queue size 128, less than required.
00:23:12.223 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:12.223 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:23:12.223 Controller IO queue size 128, less than required.
00:23:12.223 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:12.223 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:12.223 Controller IO queue size 128, less than required.
00:23:12.223 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:12.223 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:23:12.223 Controller IO queue size 128, less than required.
00:23:12.223 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:12.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:23:12.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:23:12.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:23:12.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:23:12.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:23:12.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:23:12.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:23:12.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:23:12.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:12.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:23:12.223 Initialization complete. Launching workers.
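The advisories above mean the initiator asked for a deeper queue than the 128 entries each target controller exposes, so the excess writes sit queued inside the host NVMe driver. A minimal sketch of rerunning the workload with a queue depth that fits (the -q, -o, -w, -t and -r options are standard spdk_nvme_perf flags; the depth, I/O size and run time chosen here are illustrative, and the transport ID reuses cnode1 from the log):

#!/usr/bin/env bash
# Sketch: keep the per-qpair queue depth (-q) at or below the controller's
# reported IO queue size (128) so requests are not buffered in the driver.
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

# -q queue depth, -o I/O size in bytes, -w access pattern, -t seconds,
# -r transport ID of one of the targets shown above.
"$PERF" -q 64 -o 4096 -w randwrite -t 10 \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'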
00:23:12.223 ========================================================
00:23:12.223 Latency(us)
00:23:12.223 Device Information : IOPS MiB/s Average min max
00:23:12.223 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1900.35 81.66 67378.20 591.51 119279.80
00:23:12.223 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1899.29 81.61 67450.85 692.02 121770.68
00:23:12.223 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1876.82 80.64 68276.40 700.42 122291.38
00:23:12.223 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1904.37 81.83 67314.00 814.12 121659.97
00:23:12.223 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1895.47 81.45 67661.30 695.07 120972.41
00:23:12.223 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1880.22 80.79 68225.85 942.96 128807.54
00:23:12.223 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1904.16 81.82 67405.14 832.01 118995.18
00:23:12.223 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1892.93 81.34 67821.36 915.36 133222.11
00:23:12.223 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1863.26 80.06 68221.14 854.61 121723.82
00:23:12.223 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1820.88 78.24 69830.59 592.28 122602.62
00:23:12.223 ========================================================
00:23:12.223 Total : 18837.75 809.43 67949.29 591.51 133222.11
00:23:12.223
00:23:12.223 [2024-11-26 20:01:12.594629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ec740 is same with the state(6) to be set
00:23:12.223 [2024-11-26 20:01:12.594672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(6) to be set
00:23:12.223 [2024-11-26 20:01:12.594702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19edae0 is same with the state(6) to be set
00:23:12.223 [2024-11-26 20:01:12.594730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eb890 is same with the state(6) to be set
00:23:12.223 [2024-11-26 20:01:12.594758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ec410 is same with the state(6) to be set
00:23:12.223 [2024-11-26 20:01:12.594787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ed900 is same with the state(6) to be set
00:23:12.223 [2024-11-26 20:01:12.594815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eb560 is same with the state(6) to be set
00:23:12.223 [2024-11-26 20:01:12.594843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ebbc0 is same with the state(6) to be set
00:23:12.223 [2024-11-26 20:01:12.594871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ed720 is same with the state(6) to be set
00:23:12.223 [2024-11-26 20:01:12.594899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ebef0 is same with the state(6) to be set
00:23:12.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:23:12.224 20:01:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:23:13.166 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3727504
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3727504
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3727504
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
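The wait/es bookkeeping traced above is the harness asserting an expected failure: perf was started in the background, the target was shut down underneath it, and NOT wait <pid> succeeds only because wait returns nonzero (the (( es > 128 )) line additionally screens out signal-death exit codes). A minimal sketch of that inversion helper, reduced from what the trace suggests rather than copied from common/autotest_common.sh:

NOT() {
    # Invert a command's exit status: the test step passes only when the
    # wrapped command fails (here, 'wait' on the backgrounded perf pid).
    if "$@"; then
        return 1
    fi
    return 0
}

# Usage pattern from the trace: require the backgrounded perf to have failed.
NOT wait 3727504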
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:13.167 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3727120 ']'
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3727120
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3727120 ']'
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3727120
00:23:13.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3727120) - No such process
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3727120 is not found'
00:23:13.167 Process with pid 3727120 is not found
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:13.167 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:15.718 20:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:15.718
00:23:15.718 real 0m10.296s
00:23:15.718 user 0m28.027s
00:23:15.718 sys 0m3.983s
00:23:15.718 20:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:15.718 20:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:23:15.718 ************************************
00:23:15.718 END TEST nvmf_shutdown_tc4
00:23:15.718 ************************************
00:23:15.718 20:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:23:15.718
00:23:15.718 real 0m42.809s
00:23:15.718 user 1m42.802s
00:23:15.718 sys 0m13.765s
00:23:15.718 20:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:15.718 20:01:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:23:15.718 ************************************
00:23:15.718 END TEST nvmf_shutdown
00:23:15.718 ************************************
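killprocess above probes the target pid with kill -0 before signalling it; since the nvmf target (pid 3727120) already exited during the forced shutdown, the probe fails and the helper only reports that. A minimal sketch of the probe-then-kill pattern the trace suggests (simplified, not the actual helper in common/autotest_common.sh):

killprocess() {
    local pid=$1
    # kill -0 sends no signal; it only checks whether the pid still exists.
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid is not found"
        return 0
    fi
    kill "$pid" && wait "$pid"
}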
00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:23:15.718 ************************************
00:23:15.718 START TEST nvmf_nsid
00:23:15.718 ************************************
00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:23:15.718 * Looking for test storage...
00:23:15.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
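run_test is the wrapper that produces the START TEST / END TEST banners and the real/user/sys timing blocks seen throughout this log. A minimal sketch of such a wrapper (the banner text matches the log; the body is an assumption, not the harness source):

run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    # Run the test case; 'time' emits the real/user/sys block on completion.
    time "$@"
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}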
ver1_l : ver2_l) )) 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:15.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.718 --rc genhtml_branch_coverage=1 00:23:15.718 --rc genhtml_function_coverage=1 00:23:15.718 --rc genhtml_legend=1 00:23:15.718 --rc geninfo_all_blocks=1 00:23:15.718 --rc geninfo_unexecuted_blocks=1 00:23:15.718 00:23:15.718 ' 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:15.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.718 --rc genhtml_branch_coverage=1 00:23:15.718 --rc genhtml_function_coverage=1 00:23:15.718 --rc genhtml_legend=1 00:23:15.718 --rc geninfo_all_blocks=1 00:23:15.718 --rc geninfo_unexecuted_blocks=1 00:23:15.718 00:23:15.718 ' 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:15.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.718 --rc genhtml_branch_coverage=1 00:23:15.718 --rc genhtml_function_coverage=1 00:23:15.718 --rc genhtml_legend=1 00:23:15.718 --rc geninfo_all_blocks=1 00:23:15.718 --rc geninfo_unexecuted_blocks=1 00:23:15.718 00:23:15.718 ' 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:15.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.718 --rc genhtml_branch_coverage=1 00:23:15.718 --rc genhtml_function_coverage=1 00:23:15.718 --rc genhtml_legend=1 00:23:15.718 --rc geninfo_all_blocks=1 00:23:15.718 --rc geninfo_unexecuted_blocks=1 00:23:15.718 00:23:15.718 ' 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.718 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:15.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:23:15.719 20:01:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:23.861 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:23.861 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:23.861 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:23.861 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:23.861 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:23.861 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:23.861 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:23.861 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:23.861 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:23.861 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:23.861 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:23.861 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:23.861 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:23.861 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:23:23.861 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:23.861 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:23.862 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:23.862 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
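The two "Found 0000:4b:00.x (0x8086 - 0x159b)" hits above come from the PCI scan in nvmf/common.sh, and the net-device mapping for the same ports continues just below. A minimal sketch of that sysfs walk, assuming only what the trace shows (the `/sys/bus/pci/devices/$pci/net/` glob and the Intel 0x8086/0x159b E810 match are taken from the logged commands; the loop structure here is illustrative, not the helper's exact code):

```bash
#!/usr/bin/env bash
# Sketch: enumerate Intel E810 ports (vendor 0x8086, device 0x159b) via sysfs,
# mirroring the "Found 0000:4b:00.x" / "Found net devices under ..." lines.
for pci in /sys/bus/pci/devices/*; do
  vendor=$(<"$pci/vendor")
  device=$(<"$pci/device")
  [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
  echo "Found ${pci##*/} ($vendor - $device)"
  # Each matching port exposes its kernel net device(s) under net/
  for net in "$pci"/net/*; do
    [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
  done
done
```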
00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:23.862 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:23.862 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:23.862 20:01:23 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:23.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:23.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:23:23.862 00:23:23.862 --- 10.0.0.2 ping statistics --- 00:23:23.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.862 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:23.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:23.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:23:23.862 00:23:23.862 --- 10.0.0.1 ping statistics --- 00:23:23.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.862 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3732850 00:23:23.862 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3732850 00:23:23.863 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:23.863 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3732850 ']' 00:23:23.863 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.863 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:23.863 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.863 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:23.863 20:01:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:23.863 [2024-11-26 20:01:23.903989] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
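The nvmf_tcp_init steps traced above move one E810 port into a private network namespace so target and initiator traffic crosses a real link, then verify reachability in both directions before the target app is launched inside that namespace. Condensed from the exact commands in the log (same device names, addresses, and iptables comment; run as root), the sequence is roughly:

```bash
# Condensed from the nvmf_tcp_init trace above; requires root.
set -e
ip netns add cvl_0_0_ns_spdk                        # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP (port 4420) in; the comment tag lets cleanup strip it later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
# nvmfappstart then runs the target inside the namespace (see the -i 0 line below):
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1
```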
00:23:23.863 [2024-11-26 20:01:23.904057] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.863 [2024-11-26 20:01:24.003983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.863 [2024-11-26 20:01:24.055498] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.863 [2024-11-26 20:01:24.055550] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.863 [2024-11-26 20:01:24.055559] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.863 [2024-11-26 20:01:24.055566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.863 [2024-11-26 20:01:24.055572] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:23.863 [2024-11-26 20:01:24.056380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3733197 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=1f323857-e48d-47e3-9e0c-c3d9868781e8 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=1921b4ba-bbfe-44df-a261-8fc658f7a5bd 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=0dbc1d01-0f0b-467f-9e8f-48a4d3a54add 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:24.124 null0 00:23:24.124 null1 00:23:24.124 null2 00:23:24.124 [2024-11-26 20:01:24.814000] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:23:24.124 [2024-11-26 20:01:24.814067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3733197 ] 00:23:24.124 [2024-11-26 20:01:24.816169] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.124 [2024-11-26 20:01:24.840471] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3733197 /var/tmp/tgt2.sock 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3733197 ']' 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:24.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
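What follows in the trace is the core of the nsid test: connect to the second target on 10.0.0.1:4421 and verify that each namespace's reported NGUID equals its configured UUID with the dashes stripped. A sketch of that check, built from the nvme/jq calls visible below; `check_ns` is a made-up helper for this sketch, and the `${var^^}` uppercasing is an assumption (the log only shows both sides of the comparison already in uppercase hex):

```bash
# Sketch of the NGUID check performed below (connect line and UUIDs as logged).
nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 \
  --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

uuid2nguid() { local u=${1//-/}; echo "${u^^}"; }   # strip dashes, uppercase

check_ns() {                                        # check_ns <blockdev> <uuid>
  local nguid
  nguid=$(nvme id-ns "/dev/$1" -o json | jq -r .nguid)
  [[ ${nguid^^} == "$(uuid2nguid "$2")" ]] || echo "NGUID mismatch on $1"
}
check_ns nvme0n1 1f323857-e48d-47e3-9e0c-c3d9868781e8
check_ns nvme0n2 1921b4ba-bbfe-44df-a261-8fc658f7a5bd
check_ns nvme0n3 0dbc1d01-0f0b-467f-9e8f-48a4d3a54add
nvme disconnect -d /dev/nvme0
```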
00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:24.124 20:01:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:24.124 [2024-11-26 20:01:24.905393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.385 [2024-11-26 20:01:24.957305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.644 20:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.644 20:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:24.644 20:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:24.904 [2024-11-26 20:01:25.525493] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.904 [2024-11-26 20:01:25.541680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:24.904 nvme0n1 nvme0n2 00:23:24.904 nvme1n1 00:23:24.904 20:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:24.905 20:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:24.905 20:01:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:26.292 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:26.292 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:26.292 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:23:26.292 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:26.292 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:23:26.292 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:26.292 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:26.292 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:26.292 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:26.292 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:26.292 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:23:26.292 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:23:26.292 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:23:27.238 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:27.238 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:27.238 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:27.238 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:27.499 20:01:28 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:27.499 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 1f323857-e48d-47e3-9e0c-c3d9868781e8 00:23:27.499 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:27.499 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:27.499 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:27.499 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:27.499 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:27.499 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1f323857e48d47e39e0cc3d9868781e8 00:23:27.499 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1F323857E48D47E39E0CC3D9868781E8 00:23:27.499 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 1F323857E48D47E39E0CC3D9868781E8 == \1\F\3\2\3\8\5\7\E\4\8\D\4\7\E\3\9\E\0\C\C\3\D\9\8\6\8\7\8\1\E\8 ]] 00:23:27.499 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:27.499 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:27.499 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:23:27.499 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:27.499 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:27.499 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:23:27.500 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:27.500 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 1921b4ba-bbfe-44df-a261-8fc658f7a5bd 00:23:27.500 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:27.500 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:27.500 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:27.500 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:27.500 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:27.500 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1921b4babbfe44dfa2618fc658f7a5bd 00:23:27.500 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1921B4BABBFE44DFA2618FC658F7A5BD 00:23:27.500 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 1921B4BABBFE44DFA2618FC658F7A5BD == \1\9\2\1\B\4\B\A\B\B\F\E\4\4\D\F\A\2\6\1\8\F\C\6\5\8\F\7\A\5\B\D ]] 00:23:27.500 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:27.500 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:27.500 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:27.500 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:23:27.500 20:01:28 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:27.500 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:23:27.500 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:27.500 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 0dbc1d01-0f0b-467f-9e8f-48a4d3a54add 00:23:27.500 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:27.500 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:27.500 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:27.500 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:23:27.500 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:27.500 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=0dbc1d010f0b467f9e8f48a4d3a54add 00:23:27.500 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 0DBC1D010F0B467F9E8F48A4D3A54ADD 00:23:27.500 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 0DBC1D010F0B467F9E8F48A4D3A54ADD == \0\D\B\C\1\D\0\1\0\F\0\B\4\6\7\F\9\E\8\F\4\8\A\4\D\3\A\5\4\A\D\D ]] 00:23:27.500 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:27.760 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:27.760 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:27.760 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3733197 00:23:27.760 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3733197 ']' 00:23:27.760 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3733197 00:23:27.760 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:27.760 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.760 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3733197 00:23:27.760 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:27.760 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:27.760 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3733197' 00:23:27.760 killing process with pid 3733197 00:23:27.760 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3733197 00:23:27.760 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3733197 00:23:28.021 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:28.021 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:28.021 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:28.021 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:28.021 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:23:28.021 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:28.021 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:28.021 rmmod nvme_tcp 00:23:28.021 rmmod nvme_fabrics 00:23:28.021 rmmod nvme_keyring 00:23:28.021 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:28.021 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:28.021 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:28.021 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3732850 ']' 00:23:28.021 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3732850 00:23:28.021 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3732850 ']' 00:23:28.021 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3732850 00:23:28.021 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:28.021 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.021 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3732850 00:23:28.281 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:28.281 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:28.281 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3732850' 00:23:28.281 killing process with pid 3732850 00:23:28.281 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3732850 00:23:28.281 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3732850 00:23:28.281 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:28.281 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:28.281 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:28.281 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:28.281 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:28.282 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:28.282 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:28.282 20:01:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:28.282 20:01:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:28.282 20:01:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.282 20:01:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.282 20:01:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.827 20:01:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:30.827 00:23:30.827 real 0m14.996s 00:23:30.827 user 
0m11.493s 00:23:30.827 sys 0m6.882s 00:23:30.827 20:01:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:30.827 20:01:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:30.827 ************************************ 00:23:30.827 END TEST nvmf_nsid 00:23:30.827 ************************************ 00:23:30.827 20:01:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:30.827 00:23:30.827 real 13m11.581s 00:23:30.827 user 27m42.968s 00:23:30.827 sys 3m56.016s 00:23:30.827 20:01:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:30.827 20:01:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:30.827 ************************************ 00:23:30.827 END TEST nvmf_target_extra 00:23:30.827 ************************************ 00:23:30.827 20:01:31 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:30.827 20:01:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:30.827 20:01:31 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:30.827 20:01:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:30.827 ************************************ 00:23:30.827 START TEST nvmf_host 00:23:30.827 ************************************ 00:23:30.827 20:01:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:30.827 * Looking for test storage... 00:23:30.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:30.827 20:01:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:30.827 20:01:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:23:30.827 20:01:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:30.827 20:01:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:30.827 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:30.827 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:30.827 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:30.827 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:30.827 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:30.827 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:30.827 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:30.827 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:30.827 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:30.827 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:30.827 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:30.827 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:30.827 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:30.827 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:30.827 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:30.827 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:30.827 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:30.827 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:30.827 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:30.827 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:30.827 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:30.827 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:30.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.828 --rc genhtml_branch_coverage=1 00:23:30.828 --rc genhtml_function_coverage=1 00:23:30.828 --rc genhtml_legend=1 00:23:30.828 --rc geninfo_all_blocks=1 00:23:30.828 --rc geninfo_unexecuted_blocks=1 00:23:30.828 00:23:30.828 ' 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:30.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.828 --rc genhtml_branch_coverage=1 00:23:30.828 --rc genhtml_function_coverage=1 00:23:30.828 --rc genhtml_legend=1 00:23:30.828 --rc geninfo_all_blocks=1 00:23:30.828 --rc geninfo_unexecuted_blocks=1 00:23:30.828 00:23:30.828 ' 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:30.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.828 --rc genhtml_branch_coverage=1 00:23:30.828 --rc genhtml_function_coverage=1 00:23:30.828 --rc genhtml_legend=1 00:23:30.828 --rc geninfo_all_blocks=1 00:23:30.828 --rc geninfo_unexecuted_blocks=1 00:23:30.828 00:23:30.828 ' 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:30.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.828 --rc genhtml_branch_coverage=1 00:23:30.828 --rc genhtml_function_coverage=1 00:23:30.828 --rc genhtml_legend=1 00:23:30.828 --rc geninfo_all_blocks=1 00:23:30.828 --rc geninfo_unexecuted_blocks=1 00:23:30.828 00:23:30.828 ' 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
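The lcov probe traced above runs the `lt 1.15 2` / `cmp_versions` helper from scripts/common.sh: split both version strings on '.', '-', ':' and compare field by field, treating missing fields as 0. A reduced sketch of that logic under the same splitting rule (the real helper also takes an operator argument and validates each field via its `decimal` function, which this sketch omits):

```bash
# Reduced from the cmp_versions trace: field-wise numeric "less than".
lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1                                  # equal -> not less-than
}
lt 1.15 2 && echo "lcov < 2: use old-style --rc lcov_* options"
```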
00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:30.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.828 ************************************ 00:23:30.828 START TEST nvmf_multicontroller 00:23:30.828 ************************************ 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:30.828 * Looking for test storage... 
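The "[: : integer expression expected" complaint above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': the test builtin's -eq requires integer operands, so an empty expansion makes it print the warning on stderr and return nonzero instead of evaluating to false cleanly. The behavior, and the usual defaulting fix, in isolation (an illustration, not a patch to common.sh):

#!/usr/bin/env bash
flag=""
# Empty string where -eq expects an integer: [ prints
# "integer expression expected" on stderr and exits with status 2.
[ "$flag" -eq 1 ] 2>/dev/null || echo "guard falls through, as in the trace"
# Defaulting the expansion keeps the comparison well-typed and silent:
[ "${flag:-0}" -eq 1 ] && echo "enabled" || echo "disabled"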
00:23:30.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:23:30.828 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:31.088 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:31.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.089 --rc genhtml_branch_coverage=1 00:23:31.089 --rc genhtml_function_coverage=1 00:23:31.089 --rc genhtml_legend=1 00:23:31.089 --rc geninfo_all_blocks=1 00:23:31.089 --rc geninfo_unexecuted_blocks=1 00:23:31.089 00:23:31.089 ' 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:31.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.089 --rc genhtml_branch_coverage=1 00:23:31.089 --rc genhtml_function_coverage=1 00:23:31.089 --rc genhtml_legend=1 00:23:31.089 --rc geninfo_all_blocks=1 00:23:31.089 --rc geninfo_unexecuted_blocks=1 00:23:31.089 00:23:31.089 ' 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:31.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.089 --rc genhtml_branch_coverage=1 00:23:31.089 --rc genhtml_function_coverage=1 00:23:31.089 --rc genhtml_legend=1 00:23:31.089 --rc geninfo_all_blocks=1 00:23:31.089 --rc geninfo_unexecuted_blocks=1 00:23:31.089 00:23:31.089 ' 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:31.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.089 --rc genhtml_branch_coverage=1 00:23:31.089 --rc genhtml_function_coverage=1 00:23:31.089 --rc genhtml_legend=1 00:23:31.089 --rc geninfo_all_blocks=1 00:23:31.089 --rc geninfo_unexecuted_blocks=1 00:23:31.089 00:23:31.089 ' 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:31.089 20:01:31 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:31.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:31.089 20:01:31 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:31.089 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:31.090 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.090 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:31.090 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:31.090 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:31.090 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.090 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.090 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.090 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:31.090 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:31.090 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:31.090 20:01:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.260 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:39.260 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:39.260 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:39.260 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:39.260 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:39.260 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:39.260 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:39.260 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:39.260 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:39.260 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:39.260 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:23:39.260 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:39.260 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:39.260 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:39.260 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:39.260 
20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:39.260 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:39.260 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:39.261 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:39.261 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.261 20:01:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:39.261 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:39.261 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
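The discovery loop above matches both E810 ports by PCI vendor:device 0x8086:0x159b (bound to the "ice" driver) and then reads each device's net/ directory to learn the interface names cvl_0_0 and cvl_0_1. The same lookup can be reproduced by hand from standard sysfs paths (a sketch, independent of the test harness):

#!/usr/bin/env bash
# Walk PCI devices and print the net interface behind each Intel E810 port,
# the way the pci_net_devs loop in the trace resolves 0000:4b:00.0/1.
for pci in /sys/bus/pci/devices/*; do
    [[ $(< "$pci/vendor") == 0x8086 && $(< "$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done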
00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:39.261 20:01:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:39.261 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:39.261 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:39.261 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:39.261 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:39.261 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:39.261 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:39.261 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:39.261 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:39.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:39.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.739 ms 00:23:39.261 00:23:39.261 --- 10.0.0.2 ping statistics --- 00:23:39.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.261 rtt min/avg/max/mdev = 0.739/0.739/0.739/0.000 ms 00:23:39.261 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:39.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:39.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:23:39.261 00:23:39.261 --- 10.0.0.1 ping statistics --- 00:23:39.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.261 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:23:39.261 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.261 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:23:39.261 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:39.261 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.261 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:39.261 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:39.261 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.261 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:39.261 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:39.261 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:39.261 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:39.261 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:39.261 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.261 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3738298 00:23:39.261 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3738298 00:23:39.261 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:39.261 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3738298 ']' 00:23:39.261 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.262 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.262 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.262 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.262 20:01:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.262 [2024-11-26 20:01:39.304877] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:23:39.262 [2024-11-26 20:01:39.304961] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.262 [2024-11-26 20:01:39.403298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:39.262 [2024-11-26 20:01:39.455261] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.262 [2024-11-26 20:01:39.455310] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.262 [2024-11-26 20:01:39.455319] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.262 [2024-11-26 20:01:39.455330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.262 [2024-11-26 20:01:39.455336] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.262 [2024-11-26 20:01:39.457148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.262 [2024-11-26 20:01:39.457313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:39.262 [2024-11-26 20:01:39.457441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.522 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.522 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:39.522 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:39.522 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:39.522 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.523 [2024-11-26 20:01:40.174416] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.523 Malloc0 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.523 [2024-11-26 20:01:40.259116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.523 [2024-11-26 20:01:40.270969] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.523 Malloc1 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.523 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.783 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.783 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3738381 00:23:39.783 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:39.783 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:39.783 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3738381 /var/tmp/bdevperf.sock 00:23:39.783 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3738381 ']' 00:23:39.783 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:39.783 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.783 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:39.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
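At this point the target side has been configured entirely over JSON-RPC: a TCP transport, two 64 MiB / 512 B-block malloc bdevs, and subsystems cnode1 and cnode2 each listening on 10.0.0.2 ports 4420 and 4421. Collapsed out of the xtrace into a plain rpc.py sequence it amounts to the following (names and addresses copied from the trace; a condensed sketch, not the test script itself):

# Target-side setup, run against the nvmf_tgt RPC socket inside the netns.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# cnode2 is built identically around Malloc1 on the same two ports.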
00:23:39.783 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.783 20:01:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.725 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:40.725 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:40.725 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:40.725 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.725 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.725 NVMe0n1 00:23:40.725 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.725 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:40.725 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:40.725 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.725 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.725 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.725 1 00:23:40.725 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:40.725 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:40.725 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:40.725 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:40.725 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:40.725 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:40.725 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:40.725 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:40.725 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.725 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.725 request: 00:23:40.725 { 00:23:40.725 "name": "NVMe0", 00:23:40.725 "trtype": "tcp", 00:23:40.725 "traddr": "10.0.0.2", 00:23:40.725 "adrfam": "ipv4", 00:23:40.725 "trsvcid": "4420", 00:23:40.725 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:40.725 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:40.725 "hostaddr": "10.0.0.1", 00:23:40.725 "prchk_reftag": false, 00:23:40.725 "prchk_guard": false, 00:23:40.725 "hdgst": false, 00:23:40.725 "ddgst": false, 00:23:40.725 "allow_unrecognized_csi": false, 00:23:40.725 "method": "bdev_nvme_attach_controller", 00:23:40.725 "req_id": 1 00:23:40.725 } 00:23:40.725 Got JSON-RPC error response 00:23:40.725 response: 00:23:40.725 { 00:23:40.725 "code": -114, 00:23:40.725 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:40.725 } 00:23:40.725 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.726 request: 00:23:40.726 { 00:23:40.726 "name": "NVMe0", 00:23:40.726 "trtype": "tcp", 00:23:40.726 "traddr": "10.0.0.2", 00:23:40.726 "adrfam": "ipv4", 00:23:40.726 "trsvcid": "4420", 00:23:40.726 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:40.726 "hostaddr": "10.0.0.1", 00:23:40.726 "prchk_reftag": false, 00:23:40.726 "prchk_guard": false, 00:23:40.726 "hdgst": false, 00:23:40.726 "ddgst": false, 00:23:40.726 "allow_unrecognized_csi": false, 00:23:40.726 "method": "bdev_nvme_attach_controller", 00:23:40.726 "req_id": 1 00:23:40.726 } 00:23:40.726 Got JSON-RPC error response 00:23:40.726 response: 00:23:40.726 { 00:23:40.726 "code": -114, 00:23:40.726 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:40.726 } 00:23:40.726 20:01:41 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.726 request: 00:23:40.726 { 00:23:40.726 "name": "NVMe0", 00:23:40.726 "trtype": "tcp", 00:23:40.726 "traddr": "10.0.0.2", 00:23:40.726 "adrfam": "ipv4", 00:23:40.726 "trsvcid": "4420", 00:23:40.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.726 "hostaddr": "10.0.0.1", 00:23:40.726 "prchk_reftag": false, 00:23:40.726 "prchk_guard": false, 00:23:40.726 "hdgst": false, 00:23:40.726 "ddgst": false, 00:23:40.726 "multipath": "disable", 00:23:40.726 "allow_unrecognized_csi": false, 00:23:40.726 "method": "bdev_nvme_attach_controller", 00:23:40.726 "req_id": 1 00:23:40.726 } 00:23:40.726 Got JSON-RPC error response 00:23:40.726 response: 00:23:40.726 { 00:23:40.726 "code": -114, 00:23:40.726 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:40.726 } 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:40.726 20:01:41 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.726 request: 00:23:40.726 { 00:23:40.726 "name": "NVMe0", 00:23:40.726 "trtype": "tcp", 00:23:40.726 "traddr": "10.0.0.2", 00:23:40.726 "adrfam": "ipv4", 00:23:40.726 "trsvcid": "4420", 00:23:40.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.726 "hostaddr": "10.0.0.1", 00:23:40.726 "prchk_reftag": false, 00:23:40.726 "prchk_guard": false, 00:23:40.726 "hdgst": false, 00:23:40.726 "ddgst": false, 00:23:40.726 "multipath": "failover", 00:23:40.726 "allow_unrecognized_csi": false, 00:23:40.726 "method": "bdev_nvme_attach_controller", 00:23:40.726 "req_id": 1 00:23:40.726 } 00:23:40.726 Got JSON-RPC error response 00:23:40.726 response: 00:23:40.726 { 00:23:40.726 "code": -114, 00:23:40.726 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:40.726 } 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.726 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.987 NVMe0n1 00:23:40.987 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
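Taken together, the three -114 rejections and the call that finally succeeds pin down the attach semantics: an existing controller name accepts only a genuinely new path to the same subsystem NQN. A different NQN (-n cnode2) is refused, '-x disable' forbids any second path outright, and repeating the already-attached 4420 path is refused even with '-x failover', while the untouched 4421 listener of cnode1 attaches cleanly. Reduced to its essentials (socket path, addresses, and error text as in the trace):

# Accepted: second path, same controller name, same subsystem NQN.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# Rejected with code -114, "A controller named NVMe0 already exists with the
# specified network path": same name pointed at a different subsystem.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2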
00:23:40.987 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:40.987 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.987 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.987 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.987 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:40.987 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.987 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.987 00:23:40.987 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.987 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:40.987 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:40.987 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.987 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.987 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.987 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:40.987 20:01:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:42.373 { 00:23:42.373 "results": [ 00:23:42.373 { 00:23:42.373 "job": "NVMe0n1", 00:23:42.373 "core_mask": "0x1", 00:23:42.373 "workload": "write", 00:23:42.373 "status": "finished", 00:23:42.373 "queue_depth": 128, 00:23:42.373 "io_size": 4096, 00:23:42.373 "runtime": 1.008752, 00:23:42.373 "iops": 19925.611052072265, 00:23:42.373 "mibps": 77.83441817215729, 00:23:42.373 "io_failed": 0, 00:23:42.373 "io_timeout": 0, 00:23:42.373 "avg_latency_us": 6413.29535787728, 00:23:42.373 "min_latency_us": 3768.32, 00:23:42.373 "max_latency_us": 14964.053333333333 00:23:42.373 } 00:23:42.373 ], 00:23:42.373 "core_count": 1 00:23:42.373 } 00:23:42.373 20:01:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:42.373 20:01:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.373 20:01:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.373 20:01:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.373 20:01:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:42.373 20:01:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3738381 00:23:42.373 20:01:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 3738381 ']' 00:23:42.373 20:01:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3738381 00:23:42.373 20:01:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:42.373 20:01:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:42.373 20:01:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3738381 00:23:42.373 20:01:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:42.373 20:01:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:42.373 20:01:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3738381' 00:23:42.373 killing process with pid 3738381 00:23:42.373 20:01:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3738381 00:23:42.373 20:01:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3738381 00:23:42.373 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:42.373 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.373 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.373 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.373 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:42.373 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.373 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.373 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.373 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:42.373 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:42.373 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:42.373 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:42.373 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:23:42.373 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:23:42.373 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:42.373 [2024-11-26 20:01:40.402051] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:23:42.373 [2024-11-26 20:01:40.402122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3738381 ] 00:23:42.373 [2024-11-26 20:01:40.492738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.373 [2024-11-26 20:01:40.546842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.373 [2024-11-26 20:01:41.695495] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 180a08ec-7f59-4442-b9a6-8e4ff0250df2 already exists 00:23:42.373 [2024-11-26 20:01:41.695523] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:180a08ec-7f59-4442-b9a6-8e4ff0250df2 alias for bdev NVMe1n1 00:23:42.373 [2024-11-26 20:01:41.695532] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:42.373 Running I/O for 1 seconds... 00:23:42.373 19907.00 IOPS, 77.76 MiB/s 00:23:42.373 Latency(us) 00:23:42.373 [2024-11-26T19:01:43.194Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.373 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:42.373 NVMe0n1 : 1.01 19925.61 77.83 0.00 0.00 6413.30 3768.32 14964.05 00:23:42.373 [2024-11-26T19:01:43.194Z] =================================================================================================================== 00:23:42.373 [2024-11-26T19:01:43.194Z] Total : 19925.61 77.83 0.00 0.00 6413.30 3768.32 14964.05 00:23:42.373 Received shutdown signal, test time was about 1.000000 seconds 00:23:42.373 00:23:42.373 Latency(us) 00:23:42.373 [2024-11-26T19:01:43.194Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.373 [2024-11-26T19:01:43.194Z] =================================================================================================================== 00:23:42.373 [2024-11-26T19:01:43.194Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:42.373 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:42.373 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:42.373 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:42.373 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:42.373 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:42.373 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:42.373 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:42.373 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:42.374 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:42.374 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:42.374 rmmod nvme_tcp 00:23:42.374 rmmod nvme_fabrics 00:23:42.374 rmmod nvme_keyring 00:23:42.374 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:42.374 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:42.374 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:23:42.374 
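The try.txt dump above records the bdevperf side of the run: the duplicate-UUID attach of NVMe1n1 is rejected (spdk_bdev_register() failed), after which the one-second write workload still completes at roughly 19.9k IOPS on NVMe0n1. A sketch of the driver pattern behind it, with launch flags inferred from the reported parameters (queue depth 128, 4 KiB writes, 1 s run) and a binary path that may vary by build layout; the actual launch line sits earlier in the log:

# Hypothetical launch matching the reported workload. -z makes bdevperf
# park until bdevperf.py sends perform_tests over the RPC socket.
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 &
# ... bdev_nvme_attach_controller calls against -s /var/tmp/bdevperf.sock, as traced ...
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests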
20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3738298 ']' 00:23:42.374 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3738298 00:23:42.374 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3738298 ']' 00:23:42.374 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3738298 00:23:42.374 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:42.374 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:42.374 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3738298 00:23:42.634 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:42.634 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:42.634 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3738298' 00:23:42.634 killing process with pid 3738298 00:23:42.634 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3738298 00:23:42.634 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3738298 00:23:42.634 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:42.634 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:42.634 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:42.634 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:42.634 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:42.634 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:42.634 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:42.634 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:42.634 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:42.634 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.634 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.634 20:01:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:45.180 00:23:45.180 real 0m13.974s 00:23:45.180 user 0m17.027s 00:23:45.180 sys 0m6.482s 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:45.180 ************************************ 00:23:45.180 END TEST nvmf_multicontroller 00:23:45.180 ************************************ 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.180 ************************************ 00:23:45.180 START TEST nvmf_aer 00:23:45.180 ************************************ 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:45.180 * Looking for test storage... 00:23:45.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:45.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.180 --rc genhtml_branch_coverage=1 00:23:45.180 --rc genhtml_function_coverage=1 00:23:45.180 --rc genhtml_legend=1 00:23:45.180 --rc geninfo_all_blocks=1 00:23:45.180 --rc geninfo_unexecuted_blocks=1 00:23:45.180 00:23:45.180 ' 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:45.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.180 --rc genhtml_branch_coverage=1 00:23:45.180 --rc genhtml_function_coverage=1 00:23:45.180 --rc genhtml_legend=1 00:23:45.180 --rc geninfo_all_blocks=1 00:23:45.180 --rc geninfo_unexecuted_blocks=1 00:23:45.180 00:23:45.180 ' 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:45.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.180 --rc genhtml_branch_coverage=1 00:23:45.180 --rc genhtml_function_coverage=1 00:23:45.180 --rc genhtml_legend=1 00:23:45.180 --rc geninfo_all_blocks=1 00:23:45.180 --rc geninfo_unexecuted_blocks=1 00:23:45.180 00:23:45.180 ' 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:45.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.180 --rc genhtml_branch_coverage=1 00:23:45.180 --rc genhtml_function_coverage=1 00:23:45.180 --rc genhtml_legend=1 00:23:45.180 --rc geninfo_all_blocks=1 00:23:45.180 --rc geninfo_unexecuted_blocks=1 00:23:45.180 00:23:45.180 ' 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.180 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:45.181 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.181 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:45.181 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:45.181 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:45.181 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:45.181 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:45.181 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:45.181 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:45.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:45.181 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:45.181 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:45.181 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:45.181 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:45.181 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:45.181 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:45.181 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:45.181 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:45.181 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:45.181 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.181 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:45.181 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.181 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:45.181 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:23:45.181 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:45.181 20:01:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:53.326 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:53.326 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:53.326 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:53.326 20:01:52 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:53.326 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:53.326 20:01:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:53.326 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:53.327 
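Condensed from the nvmf_tcp_init trace above, ahead of the connectivity pings that follow: the target port cvl_0_0 (10.0.0.2) is isolated in the cvl_0_0_ns_spdk namespace while the initiator port cvl_0_1 (10.0.0.1) stays in the root namespace, and the NVMe/TCP listen port is opened in the firewall. Every command is lifted from the trace (the initial address flushes are omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target NIC into its own netns
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays in the root netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'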
20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:53.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:53.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:23:53.327 00:23:53.327 --- 10.0.0.2 ping statistics --- 00:23:53.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.327 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:53.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:53.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:23:53.327 00:23:53.327 --- 10.0.0.1 ping statistics --- 00:23:53.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.327 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3743200 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3743200 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3743200 ']' 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.327 20:01:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.327 [2024-11-26 20:01:53.360382] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:23:53.327 [2024-11-26 20:01:53.360451] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.327 [2024-11-26 20:01:53.463213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:53.327 [2024-11-26 20:01:53.517398] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.327 [2024-11-26 20:01:53.517455] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.327 [2024-11-26 20:01:53.517464] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.327 [2024-11-26 20:01:53.517471] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.327 [2024-11-26 20:01:53.517477] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:53.327 [2024-11-26 20:01:53.519608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.327 [2024-11-26 20:01:53.519769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:53.327 [2024-11-26 20:01:53.519931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.327 [2024-11-26 20:01:53.519932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.589 [2024-11-26 20:01:54.241700] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.589 Malloc0 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.589 [2024-11-26 20:01:54.318097] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.589 [ 00:23:53.589 { 00:23:53.589 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:53.589 "subtype": "Discovery", 00:23:53.589 "listen_addresses": [], 00:23:53.589 "allow_any_host": true, 00:23:53.589 "hosts": [] 00:23:53.589 }, 00:23:53.589 { 00:23:53.589 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.589 "subtype": "NVMe", 00:23:53.589 "listen_addresses": [ 00:23:53.589 { 00:23:53.589 "trtype": "TCP", 00:23:53.589 "adrfam": "IPv4", 00:23:53.589 "traddr": "10.0.0.2", 00:23:53.589 "trsvcid": "4420" 00:23:53.589 } 00:23:53.589 ], 00:23:53.589 "allow_any_host": true, 00:23:53.589 "hosts": [], 00:23:53.589 "serial_number": "SPDK00000000000001", 00:23:53.589 "model_number": "SPDK bdev Controller", 00:23:53.589 "max_namespaces": 2, 00:23:53.589 "min_cntlid": 1, 00:23:53.589 "max_cntlid": 65519, 00:23:53.589 "namespaces": [ 00:23:53.589 { 00:23:53.589 "nsid": 1, 00:23:53.589 "bdev_name": "Malloc0", 00:23:53.589 "name": "Malloc0", 00:23:53.589 "nguid": "B3A99A2E4BD14F99AA4CC6419E5EADE5", 00:23:53.589 "uuid": "b3a99a2e-4bd1-4f99-aa4c-c6419e5eade5" 00:23:53.589 } 00:23:53.589 ] 00:23:53.589 } 00:23:53.589 ] 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3743367 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:23:53.589 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:53.851 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:53.851 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:23:53.851 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:23:53.851 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:53.851 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:53.851 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:53.851 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:23:53.851 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:53.851 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.851 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.851 Malloc1 00:23:53.851 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.851 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:53.851 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.851 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.851 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.851 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:53.851 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.851 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.851 Asynchronous Event Request test 00:23:53.851 Attaching to 10.0.0.2 00:23:53.851 Attached to 10.0.0.2 00:23:53.851 Registering asynchronous event callbacks... 00:23:53.851 Starting namespace attribute notice tests for all controllers... 00:23:53.851 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:53.851 aer_cb - Changed Namespace 00:23:53.851 Cleaning up... 
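The AER exchange above is driven entirely by the RPC sequence traced since nvmfappstart: a TCP transport and a subsystem capped at two namespaces (-m 2) are created, the aer tool connects and arms its callbacks, and adding the second namespace fires the Namespace Attribute Changed event seen in aer_cb. A condensed sketch, assuming rpc.py reaches the default /var/tmp/spdk.sock of the namespaced nvmf_tgt; all commands are taken from the trace:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# aer waits until the subsystem exposes -n 2 namespaces; -t names the file it
# touches once its event callbacks are registered (polled by waitforfile).
test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &
scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # triggers the AEN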
00:23:53.851 [ 00:23:53.851 { 00:23:53.851 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:53.851 "subtype": "Discovery", 00:23:53.851 "listen_addresses": [], 00:23:53.851 "allow_any_host": true, 00:23:53.851 "hosts": [] 00:23:53.851 }, 00:23:53.851 { 00:23:53.851 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.851 "subtype": "NVMe", 00:23:53.851 "listen_addresses": [ 00:23:53.851 { 00:23:53.851 "trtype": "TCP", 00:23:53.851 "adrfam": "IPv4", 00:23:53.851 "traddr": "10.0.0.2", 00:23:53.851 "trsvcid": "4420" 00:23:53.851 } 00:23:53.851 ], 00:23:53.851 "allow_any_host": true, 00:23:53.851 "hosts": [], 00:23:53.851 "serial_number": "SPDK00000000000001", 00:23:53.851 "model_number": "SPDK bdev Controller", 00:23:53.851 "max_namespaces": 2, 00:23:53.851 "min_cntlid": 1, 00:23:53.851 "max_cntlid": 65519, 00:23:53.851 "namespaces": [ 00:23:53.851 { 00:23:53.851 "nsid": 1, 00:23:53.851 "bdev_name": "Malloc0", 00:23:53.851 "name": "Malloc0", 00:23:53.851 "nguid": "B3A99A2E4BD14F99AA4CC6419E5EADE5", 00:23:53.851 "uuid": "b3a99a2e-4bd1-4f99-aa4c-c6419e5eade5" 00:23:53.851 }, 00:23:53.851 { 00:23:53.851 "nsid": 2, 00:23:53.851 "bdev_name": "Malloc1", 00:23:53.851 "name": "Malloc1", 00:23:53.851 "nguid": "187BB00CCB3641C2A9205FA71579DB8C", 00:23:53.851 "uuid": "187bb00c-cb36-41c2-a920-5fa71579db8c" 00:23:53.851 } 00:23:53.851 ] 00:23:53.851 } 00:23:53.851 ] 00:23:53.851 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.851 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3743367 00:23:53.851 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:53.851 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.851 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.851 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.851 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:53.852 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.852 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:54.113 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.113 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:54.113 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.113 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:54.113 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.113 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:54.113 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:54.113 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:54.113 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:54.113 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:54.113 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:54.113 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:54.113 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:54.113 rmmod 
nvme_tcp 00:23:54.113 rmmod nvme_fabrics 00:23:54.113 rmmod nvme_keyring 00:23:54.113 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:54.113 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:54.113 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:54.113 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3743200 ']' 00:23:54.113 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3743200 00:23:54.113 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3743200 ']' 00:23:54.113 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3743200 00:23:54.113 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:23:54.113 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:54.113 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3743200 00:23:54.113 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:54.113 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:54.113 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3743200' 00:23:54.113 killing process with pid 3743200 00:23:54.113 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3743200 00:23:54.113 20:01:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3743200 00:23:54.375 20:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:54.375 20:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:54.375 20:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:54.375 20:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:54.375 20:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:54.375 20:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:54.375 20:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:54.375 20:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:54.375 20:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:54.375 20:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.375 20:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.375 20:01:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.289 20:01:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:56.550 00:23:56.550 real 0m11.570s 00:23:56.550 user 0m8.242s 00:23:56.550 sys 0m6.200s 00:23:56.550 20:01:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:56.550 20:01:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:56.550 ************************************ 00:23:56.550 END TEST nvmf_aer 00:23:56.550 ************************************ 00:23:56.550 20:01:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:56.550 20:01:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:56.550 20:01:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:56.550 20:01:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.550 ************************************ 00:23:56.550 START TEST nvmf_async_init 00:23:56.550 ************************************ 00:23:56.550 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:56.550 * Looking for test storage... 00:23:56.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:56.550 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:56.550 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:23:56.550 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:56.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.812 --rc genhtml_branch_coverage=1 00:23:56.812 --rc genhtml_function_coverage=1 00:23:56.812 --rc genhtml_legend=1 00:23:56.812 --rc geninfo_all_blocks=1 00:23:56.812 --rc geninfo_unexecuted_blocks=1 00:23:56.812 00:23:56.812 ' 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:56.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.812 --rc genhtml_branch_coverage=1 00:23:56.812 --rc genhtml_function_coverage=1 00:23:56.812 --rc genhtml_legend=1 00:23:56.812 --rc geninfo_all_blocks=1 00:23:56.812 --rc geninfo_unexecuted_blocks=1 00:23:56.812 00:23:56.812 ' 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:56.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.812 --rc genhtml_branch_coverage=1 00:23:56.812 --rc genhtml_function_coverage=1 00:23:56.812 --rc genhtml_legend=1 00:23:56.812 --rc geninfo_all_blocks=1 00:23:56.812 --rc geninfo_unexecuted_blocks=1 00:23:56.812 00:23:56.812 ' 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:56.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.812 --rc genhtml_branch_coverage=1 00:23:56.812 --rc genhtml_function_coverage=1 00:23:56.812 --rc genhtml_legend=1 00:23:56.812 --rc geninfo_all_blocks=1 00:23:56.812 --rc geninfo_unexecuted_blocks=1 00:23:56.812 00:23:56.812 ' 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:56.812 20:01:57 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:56.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:56.812 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:56.813 20:01:57 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:56.813 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:56.813 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:56.813 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=e60e5914d95b45c79fa95665514d0f03 00:23:56.813 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:56.813 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:56.813 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:56.813 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:56.813 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:56.813 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:56.813 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.813 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:56.813 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.813 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:56.813 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:56.813 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:56.813 20:01:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:04.956 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:04.956 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:04.956 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:04.957 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:04.957 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:04.957 20:02:04 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:04.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:04.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:24:04.957 00:24:04.957 --- 10.0.0.2 ping statistics --- 00:24:04.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.957 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:04.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:04.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:24:04.957 00:24:04.957 --- 10.0.0.1 ping statistics --- 00:24:04.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.957 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3747699 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3747699 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3747699 ']' 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:04.957 20:02:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:04.957 [2024-11-26 20:02:05.025403] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:24:04.957 [2024-11-26 20:02:05.025470] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.957 [2024-11-26 20:02:05.125480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.957 [2024-11-26 20:02:05.176067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:04.957 [2024-11-26 20:02:05.176118] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:04.957 [2024-11-26 20:02:05.176127] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:04.957 [2024-11-26 20:02:05.176134] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:04.957 [2024-11-26 20:02:05.176139] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:04.957 [2024-11-26 20:02:05.176918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.218 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:05.218 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:24:05.218 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:05.218 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:05.218 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.218 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.218 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:05.218 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.218 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.218 [2024-11-26 20:02:05.888427] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.218 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.218 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:05.218 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.218 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.218 null0 00:24:05.218 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.218 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:05.218 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.218 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.218 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.218 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:05.218 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:05.218 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.218 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.218 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e60e5914d95b45c79fa95665514d0f03 00:24:05.218 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.218 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.218 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.218 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:05.218 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.219 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.219 [2024-11-26 20:02:05.948788] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:05.219 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.219 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:05.219 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.219 20:02:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.479 nvme0n1 00:24:05.479 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.479 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:05.479 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.480 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.480 [ 00:24:05.480 { 00:24:05.480 "name": "nvme0n1", 00:24:05.480 "aliases": [ 00:24:05.480 "e60e5914-d95b-45c7-9fa9-5665514d0f03" 00:24:05.480 ], 00:24:05.480 "product_name": "NVMe disk", 00:24:05.480 "block_size": 512, 00:24:05.480 "num_blocks": 2097152, 00:24:05.480 "uuid": "e60e5914-d95b-45c7-9fa9-5665514d0f03", 00:24:05.480 "numa_id": 0, 00:24:05.480 "assigned_rate_limits": { 00:24:05.480 "rw_ios_per_sec": 0, 00:24:05.480 "rw_mbytes_per_sec": 0, 00:24:05.480 "r_mbytes_per_sec": 0, 00:24:05.480 "w_mbytes_per_sec": 0 00:24:05.480 }, 00:24:05.480 "claimed": false, 00:24:05.480 "zoned": false, 00:24:05.480 "supported_io_types": { 00:24:05.480 "read": true, 00:24:05.480 "write": true, 00:24:05.480 "unmap": false, 00:24:05.480 "flush": true, 00:24:05.480 "reset": true, 00:24:05.480 "nvme_admin": true, 00:24:05.480 "nvme_io": true, 00:24:05.480 "nvme_io_md": false, 00:24:05.480 "write_zeroes": true, 00:24:05.480 "zcopy": false, 00:24:05.480 "get_zone_info": false, 00:24:05.480 "zone_management": false, 00:24:05.480 "zone_append": false, 00:24:05.480 "compare": true, 00:24:05.480 "compare_and_write": true, 00:24:05.480 "abort": true, 00:24:05.480 "seek_hole": false, 00:24:05.480 "seek_data": false, 00:24:05.480 "copy": true, 00:24:05.480 "nvme_iov_md": false 00:24:05.480 }, 00:24:05.480 
"memory_domains": [ 00:24:05.480 { 00:24:05.480 "dma_device_id": "system", 00:24:05.480 "dma_device_type": 1 00:24:05.480 } 00:24:05.480 ], 00:24:05.480 "driver_specific": { 00:24:05.480 "nvme": [ 00:24:05.480 { 00:24:05.480 "trid": { 00:24:05.480 "trtype": "TCP", 00:24:05.480 "adrfam": "IPv4", 00:24:05.480 "traddr": "10.0.0.2", 00:24:05.480 "trsvcid": "4420", 00:24:05.480 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:05.480 }, 00:24:05.480 "ctrlr_data": { 00:24:05.480 "cntlid": 1, 00:24:05.480 "vendor_id": "0x8086", 00:24:05.480 "model_number": "SPDK bdev Controller", 00:24:05.480 "serial_number": "00000000000000000000", 00:24:05.480 "firmware_revision": "25.01", 00:24:05.480 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:05.480 "oacs": { 00:24:05.480 "security": 0, 00:24:05.480 "format": 0, 00:24:05.480 "firmware": 0, 00:24:05.480 "ns_manage": 0 00:24:05.480 }, 00:24:05.480 "multi_ctrlr": true, 00:24:05.480 "ana_reporting": false 00:24:05.480 }, 00:24:05.480 "vs": { 00:24:05.480 "nvme_version": "1.3" 00:24:05.480 }, 00:24:05.480 "ns_data": { 00:24:05.480 "id": 1, 00:24:05.480 "can_share": true 00:24:05.480 } 00:24:05.480 } 00:24:05.480 ], 00:24:05.480 "mp_policy": "active_passive" 00:24:05.480 } 00:24:05.480 } 00:24:05.480 ] 00:24:05.480 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.480 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:05.480 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.480 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.480 [2024-11-26 20:02:06.225345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:05.480 [2024-11-26 20:02:06.225428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2482ce0 (9): Bad file descriptor 00:24:05.741 [2024-11-26 20:02:06.357273] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:24:05.741 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.741 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:05.741 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.741 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.741 [ 00:24:05.741 { 00:24:05.741 "name": "nvme0n1", 00:24:05.741 "aliases": [ 00:24:05.741 "e60e5914-d95b-45c7-9fa9-5665514d0f03" 00:24:05.741 ], 00:24:05.741 "product_name": "NVMe disk", 00:24:05.741 "block_size": 512, 00:24:05.741 "num_blocks": 2097152, 00:24:05.741 "uuid": "e60e5914-d95b-45c7-9fa9-5665514d0f03", 00:24:05.741 "numa_id": 0, 00:24:05.741 "assigned_rate_limits": { 00:24:05.741 "rw_ios_per_sec": 0, 00:24:05.741 "rw_mbytes_per_sec": 0, 00:24:05.741 "r_mbytes_per_sec": 0, 00:24:05.741 "w_mbytes_per_sec": 0 00:24:05.741 }, 00:24:05.741 "claimed": false, 00:24:05.741 "zoned": false, 00:24:05.741 "supported_io_types": { 00:24:05.741 "read": true, 00:24:05.741 "write": true, 00:24:05.741 "unmap": false, 00:24:05.741 "flush": true, 00:24:05.741 "reset": true, 00:24:05.741 "nvme_admin": true, 00:24:05.741 "nvme_io": true, 00:24:05.741 "nvme_io_md": false, 00:24:05.741 "write_zeroes": true, 00:24:05.741 "zcopy": false, 00:24:05.741 "get_zone_info": false, 00:24:05.741 "zone_management": false, 00:24:05.741 "zone_append": false, 00:24:05.741 "compare": true, 00:24:05.741 "compare_and_write": true, 00:24:05.741 "abort": true, 00:24:05.741 "seek_hole": false, 00:24:05.741 "seek_data": false, 00:24:05.741 "copy": true, 00:24:05.741 "nvme_iov_md": false 00:24:05.741 }, 00:24:05.741 "memory_domains": [ 00:24:05.741 { 00:24:05.741 "dma_device_id": "system", 00:24:05.741 "dma_device_type": 1 00:24:05.741 } 00:24:05.741 ], 00:24:05.741 "driver_specific": { 00:24:05.741 "nvme": [ 00:24:05.741 { 00:24:05.741 "trid": { 00:24:05.741 "trtype": "TCP", 00:24:05.741 "adrfam": "IPv4", 00:24:05.741 "traddr": "10.0.0.2", 00:24:05.742 "trsvcid": "4420", 00:24:05.742 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:05.742 }, 00:24:05.742 "ctrlr_data": { 00:24:05.742 "cntlid": 2, 00:24:05.742 "vendor_id": "0x8086", 00:24:05.742 "model_number": "SPDK bdev Controller", 00:24:05.742 "serial_number": "00000000000000000000", 00:24:05.742 "firmware_revision": "25.01", 00:24:05.742 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:05.742 "oacs": { 00:24:05.742 "security": 0, 00:24:05.742 "format": 0, 00:24:05.742 "firmware": 0, 00:24:05.742 "ns_manage": 0 00:24:05.742 }, 00:24:05.742 "multi_ctrlr": true, 00:24:05.742 "ana_reporting": false 00:24:05.742 }, 00:24:05.742 "vs": { 00:24:05.742 "nvme_version": "1.3" 00:24:05.742 }, 00:24:05.742 "ns_data": { 00:24:05.742 "id": 1, 00:24:05.742 "can_share": true 00:24:05.742 } 00:24:05.742 } 00:24:05.742 ], 00:24:05.742 "mp_policy": "active_passive" 00:24:05.742 } 00:24:05.742 } 00:24:05.742 ] 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
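The only field that changes between the first and second bdev dumps is ctrlr_data.cntlid (1 before the reset, 2 after): the subsystem admitted the reconnect as a new controller, while the namespace itself (uuid, num_blocks, ns_data) is unchanged. A convenience one-liner for pulling that field out for comparison; the python3 filter is an illustrative addition, not something the test script itself runs:

    # print the active controller ID from the bdev's driver_specific data (illustrative helper)
    ./scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
        | python3 -c 'import json,sys; b=json.load(sys.stdin)[0]; print(b["driver_specific"]["nvme"][0]["ctrlr_data"]["cntlid"])'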
00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.B22Tw1dRnA 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.B22Tw1dRnA 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.B22Tw1dRnA 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.742 [2024-11-26 20:02:06.446031] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:05.742 [2024-11-26 20:02:06.446216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.742 [2024-11-26 20:02:06.470105] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:05.742 nvme0n1 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.742 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.742 [ 00:24:05.742 { 00:24:05.742 "name": "nvme0n1", 00:24:05.742 "aliases": [ 00:24:05.742 "e60e5914-d95b-45c7-9fa9-5665514d0f03" 00:24:05.742 ], 00:24:05.742 "product_name": "NVMe disk", 00:24:05.742 "block_size": 512, 00:24:05.742 "num_blocks": 2097152, 00:24:05.742 "uuid": "e60e5914-d95b-45c7-9fa9-5665514d0f03", 00:24:05.742 "numa_id": 0, 00:24:05.742 "assigned_rate_limits": { 00:24:05.742 "rw_ios_per_sec": 0, 00:24:05.742 "rw_mbytes_per_sec": 0, 00:24:05.742 "r_mbytes_per_sec": 0, 00:24:05.742 "w_mbytes_per_sec": 0 00:24:05.742 }, 00:24:05.742 "claimed": false, 00:24:05.742 "zoned": false, 00:24:05.742 "supported_io_types": { 00:24:05.742 "read": true, 00:24:05.742 "write": true, 00:24:05.742 "unmap": false, 00:24:05.742 "flush": true, 00:24:05.742 "reset": true, 00:24:05.742 "nvme_admin": true, 00:24:05.742 "nvme_io": true, 00:24:05.742 "nvme_io_md": false, 00:24:05.742 "write_zeroes": true, 00:24:05.742 "zcopy": false, 00:24:05.742 "get_zone_info": false, 00:24:05.742 "zone_management": false, 00:24:05.742 "zone_append": false, 00:24:05.742 "compare": true, 00:24:05.742 "compare_and_write": true, 00:24:05.742 "abort": true, 00:24:05.742 "seek_hole": false, 00:24:05.742 "seek_data": false, 00:24:05.742 "copy": true, 00:24:05.742 "nvme_iov_md": false 00:24:05.742 }, 00:24:05.742 "memory_domains": [ 00:24:05.742 { 00:24:05.742 "dma_device_id": "system", 00:24:05.742 "dma_device_type": 1 00:24:05.742 } 00:24:05.742 ], 00:24:05.742 "driver_specific": { 00:24:05.742 "nvme": [ 00:24:05.742 { 00:24:05.742 "trid": { 00:24:05.742 "trtype": "TCP", 00:24:05.742 "adrfam": "IPv4", 00:24:05.742 "traddr": "10.0.0.2", 00:24:05.742 "trsvcid": "4421", 00:24:05.742 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:05.742 }, 00:24:05.742 "ctrlr_data": { 00:24:05.742 "cntlid": 3, 00:24:05.742 "vendor_id": "0x8086", 00:24:05.742 "model_number": "SPDK bdev Controller", 00:24:05.742 "serial_number": "00000000000000000000", 00:24:05.742 "firmware_revision": "25.01", 00:24:05.742 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:05.742 "oacs": { 00:24:05.742 "security": 0, 00:24:05.742 "format": 0, 00:24:05.742 "firmware": 0, 00:24:05.742 "ns_manage": 0 00:24:06.003 }, 00:24:06.003 "multi_ctrlr": true, 00:24:06.003 "ana_reporting": false 00:24:06.003 }, 00:24:06.003 "vs": { 00:24:06.003 "nvme_version": "1.3" 00:24:06.003 }, 00:24:06.003 "ns_data": { 00:24:06.003 "id": 1, 00:24:06.003 "can_share": true 00:24:06.003 } 00:24:06.003 } 00:24:06.003 ], 00:24:06.003 "mp_policy": "active_passive" 00:24:06.003 } 00:24:06.003 } 00:24:06.003 ] 00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.B22Tw1dRnA 00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
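The third attach exercises the experimental TLS path end to end: the harness writes a PSK to a temp file, registers it with the keyring, disables allow-any-host on the subsystem, opens a second listener on port 4421 with --secure-channel, allows host1 with that PSK, and finally attaches with --psk. Collected from the traces above into one runnable sketch (rpc.py again standing in for the harness's rpc_cmd; the key value is the test vector from the log, not a real secret):

    KEY_PATH=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY_PATH"
    chmod 0600 "$KEY_PATH"
    ./scripts/rpc.py keyring_file_add_key key0 "$KEY_PATH"
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0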
00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:06.003 rmmod nvme_tcp 00:24:06.003 rmmod nvme_fabrics 00:24:06.003 rmmod nvme_keyring 00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3747699 ']' 00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3747699 00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3747699 ']' 00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3747699 00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3747699 00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3747699' 00:24:06.003 killing process with pid 3747699 00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3747699 00:24:06.003 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3747699 00:24:06.264 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:06.264 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:06.264 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:06.264 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:06.264 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:24:06.264 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:06.264 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:24:06.264 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:06.264 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:06.264 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:24:06.264 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.264 20:02:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.178 20:02:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:08.178 00:24:08.178 real 0m11.770s 00:24:08.178 user 0m4.225s 00:24:08.178 sys 0m6.118s 00:24:08.178 20:02:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:08.178 20:02:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.178 ************************************ 00:24:08.178 END TEST nvmf_async_init 00:24:08.178 ************************************ 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.440 ************************************ 00:24:08.440 START TEST dma 00:24:08.440 ************************************ 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:08.440 * Looking for test storage... 00:24:08.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:08.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.440 --rc genhtml_branch_coverage=1 00:24:08.440 --rc genhtml_function_coverage=1 00:24:08.440 --rc genhtml_legend=1 00:24:08.440 --rc geninfo_all_blocks=1 00:24:08.440 --rc geninfo_unexecuted_blocks=1 00:24:08.440 00:24:08.440 ' 00:24:08.440 20:02:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:08.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.440 --rc genhtml_branch_coverage=1 00:24:08.440 --rc genhtml_function_coverage=1 00:24:08.441 --rc genhtml_legend=1 00:24:08.441 --rc geninfo_all_blocks=1 00:24:08.441 --rc geninfo_unexecuted_blocks=1 00:24:08.441 00:24:08.441 ' 00:24:08.441 20:02:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:08.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.441 --rc genhtml_branch_coverage=1 00:24:08.441 --rc genhtml_function_coverage=1 00:24:08.441 --rc genhtml_legend=1 00:24:08.441 --rc geninfo_all_blocks=1 00:24:08.441 --rc geninfo_unexecuted_blocks=1 00:24:08.441 00:24:08.441 ' 00:24:08.441 20:02:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:08.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.441 --rc genhtml_branch_coverage=1 00:24:08.441 --rc genhtml_function_coverage=1 00:24:08.441 --rc genhtml_legend=1 00:24:08.441 --rc geninfo_all_blocks=1 00:24:08.441 --rc geninfo_unexecuted_blocks=1 00:24:08.441 00:24:08.441 ' 00:24:08.441 20:02:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.441 20:02:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.703 
20:02:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.703 20:02:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:08.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:08.704 20:02:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:08.704 20:02:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:08.704 20:02:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:08.704 20:02:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:08.704 20:02:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:08.704 00:24:08.704 real 0m0.241s 00:24:08.704 user 0m0.141s 00:24:08.704 sys 0m0.116s 00:24:08.704 20:02:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:08.704 20:02:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:08.704 ************************************ 00:24:08.704 END TEST dma 00:24:08.704 ************************************ 00:24:08.704 20:02:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:08.704 20:02:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:08.704 20:02:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:08.704 20:02:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.704 ************************************ 00:24:08.704 START TEST nvmf_identify 00:24:08.704 
************************************ 00:24:08.704 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:08.704 * Looking for test storage... 00:24:08.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:08.704 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:08.704 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:24:08.704 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:08.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.967 --rc genhtml_branch_coverage=1 00:24:08.967 --rc genhtml_function_coverage=1 00:24:08.967 --rc genhtml_legend=1 00:24:08.967 --rc geninfo_all_blocks=1 00:24:08.967 --rc geninfo_unexecuted_blocks=1 00:24:08.967 00:24:08.967 ' 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:08.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.967 --rc genhtml_branch_coverage=1 00:24:08.967 --rc genhtml_function_coverage=1 00:24:08.967 --rc genhtml_legend=1 00:24:08.967 --rc geninfo_all_blocks=1 00:24:08.967 --rc geninfo_unexecuted_blocks=1 00:24:08.967 00:24:08.967 ' 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:08.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.967 --rc genhtml_branch_coverage=1 00:24:08.967 --rc genhtml_function_coverage=1 00:24:08.967 --rc genhtml_legend=1 00:24:08.967 --rc geninfo_all_blocks=1 00:24:08.967 --rc geninfo_unexecuted_blocks=1 00:24:08.967 00:24:08.967 ' 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:08.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.967 --rc genhtml_branch_coverage=1 00:24:08.967 --rc genhtml_function_coverage=1 00:24:08.967 --rc genhtml_legend=1 00:24:08.967 --rc geninfo_all_blocks=1 00:24:08.967 --rc geninfo_unexecuted_blocks=1 00:24:08.967 00:24:08.967 ' 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.967 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:08.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- 
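Twice in this run, test/nvmf/common.sh line 33 logs "[: : integer expression expected" when build_nvmf_app_args evaluates '[' '' -eq 1 ']': an unset configuration variable expands to the empty string, which [ -eq ] cannot parse as an integer, so the test fails on stderr and the script simply takes the false branch. A two-line reproduction plus a defensive rewrite (the variable name is a stand-in for illustration; the trace only shows the empty expansion):

    some_flag=""                              # stand-in for an unset SPDK_TEST_* knob
    [ "$some_flag" -eq 1 ] && echo on         # stderr: [: : integer expression expected
    [ "${some_flag:-0}" -eq 1 ] && echo on    # defaulted expansion stays quiet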
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:24:08.968 20:02:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:17.237 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:17.237 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:17.238 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
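Messages like "Found net devices under 0000:4b:00.0: cvl_0_0" come from globbing each matched PCI function's net/ directory, exactly as the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) trace shows. A standalone sketch of that sysfs walk for the two E810 functions (0x8086 - 0x159b) this host reported:

    # Map each candidate PCI NIC to its kernel net device names via sysfs;
    # the addresses are the two E810 ports found above.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
      for net in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$net" ] || continue        # skip if the glob matched nothing
        echo "Found net devices under $pci: ${net##*/}"
      done
    done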
00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:17.238 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:17.238 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:17.238 20:02:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:17.238 20:02:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:17.238 20:02:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:17.238 20:02:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:17.238 20:02:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:17.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:17.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.500 ms 00:24:17.238 00:24:17.238 --- 10.0.0.2 ping statistics --- 00:24:17.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.238 rtt min/avg/max/mdev = 0.500/0.500/0.500/0.000 ms 00:24:17.238 20:02:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:17.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:17.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:24:17.238 00:24:17.238 --- 10.0.0.1 ping statistics --- 00:24:17.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.238 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:24:17.238 20:02:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:17.238 20:02:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:24:17.238 20:02:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:17.238 20:02:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:17.238 20:02:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:17.238 20:02:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:17.238 20:02:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:17.238 20:02:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:17.238 20:02:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:17.238 20:02:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:17.238 20:02:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:17.238 20:02:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.238 20:02:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3752397 00:24:17.238 20:02:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:17.238 20:02:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:17.238 20:02:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3752397 00:24:17.238 20:02:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3752397 ']' 00:24:17.238 20:02:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.238 20:02:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:17.238 20:02:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.238 20:02:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:17.238 20:02:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.238 [2024-11-26 20:02:17.218327] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
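nvmf_tcp_init above lays out a point-to-point topology on a single host: the first E810 port (cvl_0_0) becomes the target side inside a fresh network namespace cvl_0_0_ns_spdk at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, an iptables rule admits TCP port 4420 from the initiator interface, and both directions are verified with single pings. The trace condensed into a sketch (needs root and the same NIC names; the preliminary address flushes are omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC into its own ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator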
00:24:17.238 [2024-11-26 20:02:17.218398] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.238 [2024-11-26 20:02:17.317784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:17.238 [2024-11-26 20:02:17.371952] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.238 [2024-11-26 20:02:17.372007] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.238 [2024-11-26 20:02:17.372017] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:17.238 [2024-11-26 20:02:17.372024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:17.238 [2024-11-26 20:02:17.372031] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:17.238 [2024-11-26 20:02:17.374120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.238 [2024-11-26 20:02:17.374280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.238 [2024-11-26 20:02:17.374327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:17.238 [2024-11-26 20:02:17.374329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.238 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:17.238 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:24:17.238 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:17.238 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.238 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.238 [2024-11-26 20:02:18.044050] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.238 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.499 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:17.499 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:17.499 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.499 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:17.499 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.499 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.499 Malloc0 00:24:17.499 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.499 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:17.499 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.499 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.499 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.499 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:17.499 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.499 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.499 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.499 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:17.499 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.499 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.499 [2024-11-26 20:02:18.169156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.499 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.499 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:17.499 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.499 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.499 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.499 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:17.499 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.499 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.499 [ 00:24:17.499 { 00:24:17.499 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:17.499 "subtype": "Discovery", 00:24:17.499 "listen_addresses": [ 00:24:17.499 { 00:24:17.499 "trtype": "TCP", 00:24:17.499 "adrfam": "IPv4", 00:24:17.499 "traddr": "10.0.0.2", 00:24:17.499 "trsvcid": "4420" 00:24:17.499 } 00:24:17.499 ], 00:24:17.499 "allow_any_host": true, 00:24:17.499 "hosts": [] 00:24:17.499 }, 00:24:17.499 { 00:24:17.499 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.499 "subtype": "NVMe", 00:24:17.499 "listen_addresses": [ 00:24:17.499 { 00:24:17.499 "trtype": "TCP", 00:24:17.499 "adrfam": "IPv4", 00:24:17.499 "traddr": "10.0.0.2", 00:24:17.499 "trsvcid": "4420" 00:24:17.499 } 00:24:17.499 ], 00:24:17.499 "allow_any_host": true, 00:24:17.499 "hosts": [], 00:24:17.499 "serial_number": "SPDK00000000000001", 00:24:17.499 "model_number": "SPDK bdev Controller", 00:24:17.499 "max_namespaces": 32, 00:24:17.499 "min_cntlid": 1, 00:24:17.499 "max_cntlid": 65519, 00:24:17.499 "namespaces": [ 00:24:17.499 { 00:24:17.499 "nsid": 1, 00:24:17.499 "bdev_name": "Malloc0", 00:24:17.499 "name": "Malloc0", 00:24:17.499 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:17.499 "eui64": "ABCDEF0123456789", 00:24:17.499 "uuid": "60302fff-e41f-4c6d-aac2-12188daaac82" 00:24:17.499 } 00:24:17.499 ] 00:24:17.499 } 00:24:17.499 ] 00:24:17.499 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.499 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
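identify.sh then starts nvmf_tgt inside the target namespace (-i 0 -e 0xFFFF -m 0xF, which the startup log above answers with four reactors on cores 0-3) and provisions it over /var/tmp/spdk.sock: a TCP transport (the -o -u 8192 options copied verbatim from the trace), a 64 MB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as namespace 1, and listeners for both the subsystem and discovery on 10.0.0.2:4420. The same sequence written against scripts/rpc.py, which the rpc_cmd wrapper in the trace ultimately drives (SPDK_DIR is a placeholder for the checkout path):

    SPDK_DIR=/path/to/spdk    # placeholder
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    # (the harness waits for the RPC socket before issuing these)
    "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
    "$SPDK_DIR/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
    "$SPDK_DIR/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001
    "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420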
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:17.499 [2024-11-26 20:02:18.234930] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:24:17.499 [2024-11-26 20:02:18.234975] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3752468 ] 00:24:17.499 [2024-11-26 20:02:18.288959] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:17.499 [2024-11-26 20:02:18.289031] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:17.499 [2024-11-26 20:02:18.289037] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:17.499 [2024-11-26 20:02:18.289063] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:17.499 [2024-11-26 20:02:18.289075] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:17.499 [2024-11-26 20:02:18.293574] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:17.499 [2024-11-26 20:02:18.293623] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x18ba690 0 00:24:17.499 [2024-11-26 20:02:18.301180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:17.499 [2024-11-26 20:02:18.301205] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:17.499 [2024-11-26 20:02:18.301210] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:17.499 [2024-11-26 20:02:18.301213] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:17.499 [2024-11-26 20:02:18.301258] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.499 [2024-11-26 20:02:18.301265] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.499 [2024-11-26 20:02:18.301269] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ba690) 00:24:17.499 [2024-11-26 20:02:18.301287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:17.499 [2024-11-26 20:02:18.301310] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x191c100, cid 0, qid 0 00:24:17.499 [2024-11-26 20:02:18.309176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.499 [2024-11-26 20:02:18.309188] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.499 [2024-11-26 20:02:18.309192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.499 [2024-11-26 20:02:18.309197] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x191c100) on tqpair=0x18ba690 00:24:17.499 [2024-11-26 20:02:18.309211] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:17.499 [2024-11-26 20:02:18.309220] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:17.499 [2024-11-26 20:02:18.309225] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:17.499 [2024-11-26 20:02:18.309243] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.499 [2024-11-26 20:02:18.309248] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.499 [2024-11-26 20:02:18.309251] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ba690) 00:24:17.499 [2024-11-26 20:02:18.309260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.499 [2024-11-26 20:02:18.309277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x191c100, cid 0, qid 0 00:24:17.499 [2024-11-26 20:02:18.309493] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.499 [2024-11-26 20:02:18.309501] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.499 [2024-11-26 20:02:18.309504] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.499 [2024-11-26 20:02:18.309508] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x191c100) on tqpair=0x18ba690 00:24:17.499 [2024-11-26 20:02:18.309518] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:17.499 [2024-11-26 20:02:18.309525] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:17.499 [2024-11-26 20:02:18.309532] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.499 [2024-11-26 20:02:18.309536] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.499 [2024-11-26 20:02:18.309540] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ba690) 00:24:17.499 [2024-11-26 20:02:18.309547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.499 [2024-11-26 20:02:18.309559] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x191c100, cid 0, qid 0 00:24:17.499 [2024-11-26 20:02:18.309772] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.499 [2024-11-26 20:02:18.309778] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.499 [2024-11-26 20:02:18.309782] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.499 [2024-11-26 20:02:18.309786] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x191c100) on tqpair=0x18ba690 00:24:17.499 [2024-11-26 20:02:18.309791] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:17.499 [2024-11-26 20:02:18.309807] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:17.499 [2024-11-26 20:02:18.309814] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.499 [2024-11-26 20:02:18.309818] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.499 [2024-11-26 20:02:18.309822] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ba690) 00:24:17.499 [2024-11-26 20:02:18.309829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.499 [2024-11-26 20:02:18.309840] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x191c100, cid 0, qid 0 
00:24:17.499 [2024-11-26 20:02:18.310045] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.499 [2024-11-26 20:02:18.310051] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.499 [2024-11-26 20:02:18.310055] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.499 [2024-11-26 20:02:18.310058] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x191c100) on tqpair=0x18ba690 00:24:17.500 [2024-11-26 20:02:18.310064] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:17.500 [2024-11-26 20:02:18.310074] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.310078] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.310081] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ba690) 00:24:17.500 [2024-11-26 20:02:18.310088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.500 [2024-11-26 20:02:18.310098] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x191c100, cid 0, qid 0 00:24:17.500 [2024-11-26 20:02:18.310300] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.500 [2024-11-26 20:02:18.310307] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.500 [2024-11-26 20:02:18.310311] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.310315] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x191c100) on tqpair=0x18ba690 00:24:17.500 [2024-11-26 20:02:18.310320] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:17.500 [2024-11-26 20:02:18.310325] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:17.500 [2024-11-26 20:02:18.310333] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:17.500 [2024-11-26 20:02:18.310442] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:17.500 [2024-11-26 20:02:18.310447] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:17.500 [2024-11-26 20:02:18.310455] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.310459] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.310463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ba690) 00:24:17.500 [2024-11-26 20:02:18.310470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.500 [2024-11-26 20:02:18.310480] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x191c100, cid 0, qid 0 00:24:17.500 [2024-11-26 20:02:18.310687] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.500 [2024-11-26 20:02:18.310693] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.500 [2024-11-26 20:02:18.310704] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.310708] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x191c100) on tqpair=0x18ba690 00:24:17.500 [2024-11-26 20:02:18.310713] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:17.500 [2024-11-26 20:02:18.310724] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.310728] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.310731] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ba690) 00:24:17.500 [2024-11-26 20:02:18.310738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.500 [2024-11-26 20:02:18.310748] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x191c100, cid 0, qid 0 00:24:17.500 [2024-11-26 20:02:18.310924] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.500 [2024-11-26 20:02:18.310930] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.500 [2024-11-26 20:02:18.310934] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.310938] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x191c100) on tqpair=0x18ba690 00:24:17.500 [2024-11-26 20:02:18.310942] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:17.500 [2024-11-26 20:02:18.310948] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:17.500 [2024-11-26 20:02:18.310955] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:17.500 [2024-11-26 20:02:18.310964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:17.500 [2024-11-26 20:02:18.310974] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.310978] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ba690) 00:24:17.500 [2024-11-26 20:02:18.310985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.500 [2024-11-26 20:02:18.310996] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x191c100, cid 0, qid 0 00:24:17.500 [2024-11-26 20:02:18.311224] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.500 [2024-11-26 20:02:18.311232] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.500 [2024-11-26 20:02:18.311236] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.311240] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18ba690): datao=0, datal=4096, cccid=0 00:24:17.500 [2024-11-26 20:02:18.311245] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x191c100) on tqpair(0x18ba690): expected_datao=0, payload_size=4096 00:24:17.500 [2024-11-26 20:02:18.311250] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.311259] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.311263] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.311407] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.500 [2024-11-26 20:02:18.311414] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.500 [2024-11-26 20:02:18.311417] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.311421] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x191c100) on tqpair=0x18ba690 00:24:17.500 [2024-11-26 20:02:18.311430] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:17.500 [2024-11-26 20:02:18.311438] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:17.500 [2024-11-26 20:02:18.311443] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:17.500 [2024-11-26 20:02:18.311448] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:17.500 [2024-11-26 20:02:18.311453] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:24:17.500 [2024-11-26 20:02:18.311458] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:17.500 [2024-11-26 20:02:18.311466] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:17.500 [2024-11-26 20:02:18.311473] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.311477] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.311481] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ba690) 00:24:17.500 [2024-11-26 20:02:18.311488] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:17.500 [2024-11-26 20:02:18.311500] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x191c100, cid 0, qid 0 00:24:17.500 [2024-11-26 20:02:18.311681] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.500 [2024-11-26 20:02:18.311687] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.500 [2024-11-26 20:02:18.311690] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.311694] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x191c100) on tqpair=0x18ba690 00:24:17.500 [2024-11-26 20:02:18.311702] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.311706] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.311709] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ba690) 00:24:17.500 
[2024-11-26 20:02:18.311715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.500 [2024-11-26 20:02:18.311722] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.311725] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.311729] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x18ba690) 00:24:17.500 [2024-11-26 20:02:18.311735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.500 [2024-11-26 20:02:18.311741] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.311745] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.311748] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x18ba690) 00:24:17.500 [2024-11-26 20:02:18.311754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.500 [2024-11-26 20:02:18.311760] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.311764] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.311768] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ba690) 00:24:17.500 [2024-11-26 20:02:18.311773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.500 [2024-11-26 20:02:18.311778] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:17.500 [2024-11-26 20:02:18.311793] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:17.500 [2024-11-26 20:02:18.311799] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.311803] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18ba690) 00:24:17.500 [2024-11-26 20:02:18.311810] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.500 [2024-11-26 20:02:18.311822] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x191c100, cid 0, qid 0 00:24:17.500 [2024-11-26 20:02:18.311827] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x191c280, cid 1, qid 0 00:24:17.500 [2024-11-26 20:02:18.311832] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x191c400, cid 2, qid 0 00:24:17.500 [2024-11-26 20:02:18.311837] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x191c580, cid 3, qid 0 00:24:17.500 [2024-11-26 20:02:18.311842] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x191c700, cid 4, qid 0 00:24:17.500 [2024-11-26 20:02:18.312073] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.500 [2024-11-26 20:02:18.312079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.500 [2024-11-26 20:02:18.312083] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:24:17.500 [2024-11-26 20:02:18.312087] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x191c700) on tqpair=0x18ba690 00:24:17.500 [2024-11-26 20:02:18.312092] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:17.500 [2024-11-26 20:02:18.312097] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:24:17.500 [2024-11-26 20:02:18.312108] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.312112] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18ba690) 00:24:17.500 [2024-11-26 20:02:18.312119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.500 [2024-11-26 20:02:18.312129] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x191c700, cid 4, qid 0 00:24:17.500 [2024-11-26 20:02:18.312313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.500 [2024-11-26 20:02:18.312320] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.500 [2024-11-26 20:02:18.312323] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.312327] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18ba690): datao=0, datal=4096, cccid=4 00:24:17.500 [2024-11-26 20:02:18.312332] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x191c700) on tqpair(0x18ba690): expected_datao=0, payload_size=4096 00:24:17.500 [2024-11-26 20:02:18.312336] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.312393] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.312397] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.312561] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.500 [2024-11-26 20:02:18.312567] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.500 [2024-11-26 20:02:18.312570] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.312574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x191c700) on tqpair=0x18ba690 00:24:17.500 [2024-11-26 20:02:18.312587] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:17.500 [2024-11-26 20:02:18.312616] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.312620] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18ba690) 00:24:17.500 [2024-11-26 20:02:18.312629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.500 [2024-11-26 20:02:18.312636] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.312640] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.312644] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18ba690) 00:24:17.500 [2024-11-26 20:02:18.312650] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.500 [2024-11-26 20:02:18.312665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x191c700, cid 4, qid 0 00:24:17.500 [2024-11-26 20:02:18.312670] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x191c880, cid 5, qid 0 00:24:17.500 [2024-11-26 20:02:18.312912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.500 [2024-11-26 20:02:18.312918] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.500 [2024-11-26 20:02:18.312921] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.312925] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18ba690): datao=0, datal=1024, cccid=4 00:24:17.500 [2024-11-26 20:02:18.312930] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x191c700) on tqpair(0x18ba690): expected_datao=0, payload_size=1024 00:24:17.500 [2024-11-26 20:02:18.312934] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.312941] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.312944] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.312950] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.500 [2024-11-26 20:02:18.312956] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.500 [2024-11-26 20:02:18.312959] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.500 [2024-11-26 20:02:18.312963] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x191c880) on tqpair=0x18ba690 00:24:17.763 [2024-11-26 20:02:18.354323] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.763 [2024-11-26 20:02:18.354339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.763 [2024-11-26 20:02:18.354342] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.763 [2024-11-26 20:02:18.354347] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x191c700) on tqpair=0x18ba690 00:24:17.763 [2024-11-26 20:02:18.354361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.763 [2024-11-26 20:02:18.354365] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18ba690) 00:24:17.763 [2024-11-26 20:02:18.354373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.763 [2024-11-26 20:02:18.354390] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x191c700, cid 4, qid 0 00:24:17.763 [2024-11-26 20:02:18.354642] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.763 [2024-11-26 20:02:18.354649] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.763 [2024-11-26 20:02:18.354653] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.763 [2024-11-26 20:02:18.354656] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18ba690): datao=0, datal=3072, cccid=4 00:24:17.763 [2024-11-26 20:02:18.354661] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x191c700) on tqpair(0x18ba690): expected_datao=0, payload_size=3072 00:24:17.763 [2024-11-26 20:02:18.354665] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:17.763 [2024-11-26 20:02:18.354673] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:17.763 [2024-11-26 20:02:18.354677] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:17.763 [2024-11-26 20:02:18.354837] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:17.763 [2024-11-26 20:02:18.354844] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:17.763 [2024-11-26 20:02:18.354853] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:17.763 [2024-11-26 20:02:18.354857] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x191c700) on tqpair=0x18ba690
00:24:17.763 [2024-11-26 20:02:18.354865] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:17.763 [2024-11-26 20:02:18.354869] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18ba690)
00:24:17.763 [2024-11-26 20:02:18.354876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.763 [2024-11-26 20:02:18.354890] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x191c700, cid 4, qid 0
00:24:17.763 [2024-11-26 20:02:18.355091] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:17.763 [2024-11-26 20:02:18.355098] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:17.763 [2024-11-26 20:02:18.355102] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:17.763 [2024-11-26 20:02:18.355105] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18ba690): datao=0, datal=8, cccid=4
00:24:17.763 [2024-11-26 20:02:18.355110] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x191c700) on tqpair(0x18ba690): expected_datao=0, payload_size=8
00:24:17.763 [2024-11-26 20:02:18.355114] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:17.763 [2024-11-26 20:02:18.355121] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:17.763 [2024-11-26 20:02:18.355124] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:17.763 [2024-11-26 20:02:18.396337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:17.763 [2024-11-26 20:02:18.396349] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:17.763 [2024-11-26 20:02:18.396353] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:17.763 [2024-11-26 20:02:18.396357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x191c700) on tqpair=0x18ba690
00:24:17.763 =====================================================
00:24:17.763 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:24:17.763 =====================================================
00:24:17.763 Controller Capabilities/Features
00:24:17.763 ================================
00:24:17.763 Vendor ID: 0000
00:24:17.763 Subsystem Vendor ID: 0000
00:24:17.763 Serial Number: ....................
00:24:17.763 Model Number: ........................................
00:24:17.763 Firmware Version: 25.01
00:24:17.763 Recommended Arb Burst: 0
00:24:17.763 IEEE OUI Identifier: 00 00 00
00:24:17.763 Multi-path I/O
00:24:17.763 May have multiple subsystem ports: No
00:24:17.763 May have multiple controllers: No
00:24:17.763 Associated with SR-IOV VF: No
00:24:17.763 Max Data Transfer Size: 131072
00:24:17.763 Max Number of Namespaces: 0
00:24:17.763 Max Number of I/O Queues: 1024
00:24:17.763 NVMe Specification Version (VS): 1.3
00:24:17.763 NVMe Specification Version (Identify): 1.3
00:24:17.763 Maximum Queue Entries: 128
00:24:17.763 Contiguous Queues Required: Yes
00:24:17.763 Arbitration Mechanisms Supported
00:24:17.763 Weighted Round Robin: Not Supported
00:24:17.763 Vendor Specific: Not Supported
00:24:17.763 Reset Timeout: 15000 ms
00:24:17.763 Doorbell Stride: 4 bytes
00:24:17.763 NVM Subsystem Reset: Not Supported
00:24:17.763 Command Sets Supported
00:24:17.763 NVM Command Set: Supported
00:24:17.763 Boot Partition: Not Supported
00:24:17.763 Memory Page Size Minimum: 4096 bytes
00:24:17.763 Memory Page Size Maximum: 4096 bytes
00:24:17.763 Persistent Memory Region: Not Supported
00:24:17.763 Optional Asynchronous Events Supported
00:24:17.763 Namespace Attribute Notices: Not Supported
00:24:17.763 Firmware Activation Notices: Not Supported
00:24:17.763 ANA Change Notices: Not Supported
00:24:17.763 PLE Aggregate Log Change Notices: Not Supported
00:24:17.763 LBA Status Info Alert Notices: Not Supported
00:24:17.763 EGE Aggregate Log Change Notices: Not Supported
00:24:17.763 Normal NVM Subsystem Shutdown event: Not Supported
00:24:17.763 Zone Descriptor Change Notices: Not Supported
00:24:17.763 Discovery Log Change Notices: Supported
00:24:17.763 Controller Attributes
00:24:17.763 128-bit Host Identifier: Not Supported
00:24:17.763 Non-Operational Permissive Mode: Not Supported
00:24:17.763 NVM Sets: Not Supported
00:24:17.763 Read Recovery Levels: Not Supported
00:24:17.763 Endurance Groups: Not Supported
00:24:17.763 Predictable Latency Mode: Not Supported
00:24:17.763 Traffic Based Keep ALive: Not Supported
00:24:17.763 Namespace Granularity: Not Supported
00:24:17.763 SQ Associations: Not Supported
00:24:17.763 UUID List: Not Supported
00:24:17.763 Multi-Domain Subsystem: Not Supported
00:24:17.763 Fixed Capacity Management: Not Supported
00:24:17.763 Variable Capacity Management: Not Supported
00:24:17.763 Delete Endurance Group: Not Supported
00:24:17.763 Delete NVM Set: Not Supported
00:24:17.763 Extended LBA Formats Supported: Not Supported
00:24:17.763 Flexible Data Placement Supported: Not Supported
00:24:17.763
00:24:17.763 Controller Memory Buffer Support
00:24:17.763 ================================
00:24:17.763 Supported: No
00:24:17.763
00:24:17.763 Persistent Memory Region Support
00:24:17.763 ================================
00:24:17.763 Supported: No
00:24:17.763
00:24:17.763 Admin Command Set Attributes
00:24:17.763 ============================
00:24:17.763 Security Send/Receive: Not Supported
00:24:17.763 Format NVM: Not Supported
00:24:17.763 Firmware Activate/Download: Not Supported
00:24:17.763 Namespace Management: Not Supported
00:24:17.763 Device Self-Test: Not Supported
00:24:17.763 Directives: Not Supported
00:24:17.763 NVMe-MI: Not Supported
00:24:17.763 Virtualization Management: Not Supported
00:24:17.763 Doorbell Buffer Config: Not Supported
00:24:17.763 Get LBA Status Capability: Not Supported
00:24:17.763 Command & Feature Lockdown Capability: Not Supported
00:24:17.763 Abort Command Limit: 1
00:24:17.763 Async Event Request Limit: 4
00:24:17.763 Number of Firmware Slots: N/A
00:24:17.763 Firmware Slot 1 Read-Only: N/A
00:24:17.763 Firmware Activation Without Reset: N/A
00:24:17.763 Multiple Update Detection Support: N/A
00:24:17.763 Firmware Update Granularity: No Information Provided
00:24:17.763 Per-Namespace SMART Log: No
00:24:17.763 Asymmetric Namespace Access Log Page: Not Supported
00:24:17.763 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:24:17.763 Command Effects Log Page: Not Supported
00:24:17.763 Get Log Page Extended Data: Supported
00:24:17.763 Telemetry Log Pages: Not Supported
00:24:17.763 Persistent Event Log Pages: Not Supported
00:24:17.763 Supported Log Pages Log Page: May Support
00:24:17.763 Commands Supported & Effects Log Page: Not Supported
00:24:17.763 Feature Identifiers & Effects Log Page:May Support
00:24:17.763 NVMe-MI Commands & Effects Log Page: May Support
00:24:17.763 Data Area 4 for Telemetry Log: Not Supported
00:24:17.763 Error Log Page Entries Supported: 128
00:24:17.763 Keep Alive: Not Supported
00:24:17.763
00:24:17.763 NVM Command Set Attributes
00:24:17.763 ==========================
00:24:17.763 Submission Queue Entry Size
00:24:17.763 Max: 1
00:24:17.763 Min: 1
00:24:17.763 Completion Queue Entry Size
00:24:17.763 Max: 1
00:24:17.763 Min: 1
00:24:17.763 Number of Namespaces: 0
00:24:17.763 Compare Command: Not Supported
00:24:17.763 Write Uncorrectable Command: Not Supported
00:24:17.763 Dataset Management Command: Not Supported
00:24:17.763 Write Zeroes Command: Not Supported
00:24:17.763 Set Features Save Field: Not Supported
00:24:17.763 Reservations: Not Supported
00:24:17.763 Timestamp: Not Supported
00:24:17.763 Copy: Not Supported
00:24:17.763 Volatile Write Cache: Not Present
00:24:17.763 Atomic Write Unit (Normal): 1
00:24:17.763 Atomic Write Unit (PFail): 1
00:24:17.763 Atomic Compare & Write Unit: 1
00:24:17.763 Fused Compare & Write: Supported
00:24:17.763 Scatter-Gather List
00:24:17.763 SGL Command Set: Supported
00:24:17.763 SGL Keyed: Supported
00:24:17.763 SGL Bit Bucket Descriptor: Not Supported
00:24:17.763 SGL Metadata Pointer: Not Supported
00:24:17.763 Oversized SGL: Not Supported
00:24:17.763 SGL Metadata Address: Not Supported
00:24:17.763 SGL Offset: Supported
00:24:17.763 Transport SGL Data Block: Not Supported
00:24:17.763 Replay Protected Memory Block: Not Supported
00:24:17.763
00:24:17.763 Firmware Slot Information
00:24:17.764 =========================
00:24:17.764 Active slot: 0
00:24:17.764
00:24:17.764
00:24:17.764 Error Log
00:24:17.764 =========
00:24:17.764
00:24:17.764 Active Namespaces
00:24:17.764 =================
00:24:17.764 Discovery Log Page
00:24:17.764 ==================
00:24:17.764 Generation Counter: 2
00:24:17.764 Number of Records: 2
00:24:17.764 Record Format: 0
00:24:17.764
00:24:17.764 Discovery Log Entry 0
00:24:17.764 ----------------------
00:24:17.764 Transport Type: 3 (TCP)
00:24:17.764 Address Family: 1 (IPv4)
00:24:17.764 Subsystem Type: 3 (Current Discovery Subsystem)
00:24:17.764 Entry Flags:
00:24:17.764 Duplicate Returned Information: 1
00:24:17.764 Explicit Persistent Connection Support for Discovery: 1
00:24:17.764 Transport Requirements:
00:24:17.764 Secure Channel: Not Required
00:24:17.764 Port ID: 0 (0x0000)
00:24:17.764 Controller ID: 65535 (0xffff)
00:24:17.764 Admin Max SQ Size: 128
00:24:17.764 Transport Service Identifier: 4420
00:24:17.764 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:24:17.764 Transport Address: 10.0.0.2
00:24:17.764 Discovery Log Entry 1
00:24:17.764 ----------------------
00:24:17.764 Transport Type: 3 (TCP)
00:24:17.764 Address Family: 1 (IPv4)
00:24:17.764 Subsystem Type: 2 (NVM Subsystem)
00:24:17.764 Entry Flags:
00:24:17.764 Duplicate Returned Information: 0
00:24:17.764 Explicit Persistent Connection Support for Discovery: 0
00:24:17.764 Transport Requirements:
00:24:17.764 Secure Channel: Not Required
00:24:17.764 Port ID: 0 (0x0000)
00:24:17.764 Controller ID: 65535 (0xffff)
00:24:17.764 Admin Max SQ Size: 128
00:24:17.764 Transport Service Identifier: 4420
00:24:17.764 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:24:17.764 Transport Address: 10.0.0.2 [2024-11-26 20:02:18.396461] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:24:17.764 [2024-11-26 20:02:18.396473] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x191c100) on tqpair=0x18ba690
00:24:17.764 [2024-11-26 20:02:18.396480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:17.764 [2024-11-26 20:02:18.396486] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x191c280) on tqpair=0x18ba690
00:24:17.764 [2024-11-26 20:02:18.396491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:17.764 [2024-11-26 20:02:18.396496] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x191c400) on tqpair=0x18ba690
00:24:17.764 [2024-11-26 20:02:18.396501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:17.764 [2024-11-26 20:02:18.396506] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x191c580) on tqpair=0x18ba690
00:24:17.764 [2024-11-26 20:02:18.396510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:17.764 [2024-11-26 20:02:18.396520] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:17.764 [2024-11-26 20:02:18.396524] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:17.764 [2024-11-26 20:02:18.396528] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ba690)
00:24:17.764 [2024-11-26 20:02:18.396536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.764 [2024-11-26 20:02:18.396552] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x191c580, cid 3, qid 0
00:24:17.764 [2024-11-26 20:02:18.396808] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:17.764 [2024-11-26 20:02:18.396815] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:17.764 [2024-11-26 20:02:18.396821] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:17.764 [2024-11-26 20:02:18.396825] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x191c580) on tqpair=0x18ba690
00:24:17.764 [2024-11-26 20:02:18.396833] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:17.764 [2024-11-26 20:02:18.396837] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:17.764 [2024-11-26 20:02:18.396840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ba690)
00:24:17.764 [2024-11-26
20:02:18.396847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.764 [2024-11-26 20:02:18.396861] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x191c580, cid 3, qid 0 00:24:17.764 [2024-11-26 20:02:18.397089] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.764 [2024-11-26 20:02:18.397096] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.764 [2024-11-26 20:02:18.397099] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.397103] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x191c580) on tqpair=0x18ba690 00:24:17.764 [2024-11-26 20:02:18.397109] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:17.764 [2024-11-26 20:02:18.397113] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:17.764 [2024-11-26 20:02:18.397123] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.397127] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.397130] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ba690) 00:24:17.764 [2024-11-26 20:02:18.397137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.764 [2024-11-26 20:02:18.397147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x191c580, cid 3, qid 0 00:24:17.764 [2024-11-26 20:02:18.397360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.764 [2024-11-26 20:02:18.397367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.764 [2024-11-26 20:02:18.397370] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.397374] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x191c580) on tqpair=0x18ba690 00:24:17.764 [2024-11-26 20:02:18.397385] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.397389] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.397392] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ba690) 00:24:17.764 [2024-11-26 20:02:18.397399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.764 [2024-11-26 20:02:18.397409] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x191c580, cid 3, qid 0 00:24:17.764 [2024-11-26 20:02:18.397625] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.764 [2024-11-26 20:02:18.397632] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.764 [2024-11-26 20:02:18.397635] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.397639] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x191c580) on tqpair=0x18ba690 00:24:17.764 [2024-11-26 20:02:18.397649] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.397653] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.397656] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ba690) 00:24:17.764 [2024-11-26 20:02:18.397663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.764 [2024-11-26 20:02:18.397673] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x191c580, cid 3, qid 0 00:24:17.764 [2024-11-26 20:02:18.397844] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.764 [2024-11-26 20:02:18.397851] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.764 [2024-11-26 20:02:18.397854] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.397858] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x191c580) on tqpair=0x18ba690 00:24:17.764 [2024-11-26 20:02:18.397868] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.397872] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.397875] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ba690) 00:24:17.764 [2024-11-26 20:02:18.397882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.764 [2024-11-26 20:02:18.397892] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x191c580, cid 3, qid 0 00:24:17.764 [2024-11-26 20:02:18.398066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.764 [2024-11-26 20:02:18.398072] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.764 [2024-11-26 20:02:18.398076] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.398080] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x191c580) on tqpair=0x18ba690 00:24:17.764 [2024-11-26 20:02:18.398089] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.398093] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.398097] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ba690) 00:24:17.764 [2024-11-26 20:02:18.398103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.764 [2024-11-26 20:02:18.398114] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x191c580, cid 3, qid 0 00:24:17.764 [2024-11-26 20:02:18.402169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.764 [2024-11-26 20:02:18.402179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.764 [2024-11-26 20:02:18.402183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.402187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x191c580) on tqpair=0x18ba690 00:24:17.764 [2024-11-26 20:02:18.402195] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:24:17.764 00:24:17.764 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 
00:24:17.764 [2024-11-26 20:02:18.451109] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:24:17.764 [2024-11-26 20:02:18.451154] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3752522 ] 00:24:17.764 [2024-11-26 20:02:18.509448] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:17.764 [2024-11-26 20:02:18.509509] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:17.764 [2024-11-26 20:02:18.509515] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:17.764 [2024-11-26 20:02:18.509534] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:17.764 [2024-11-26 20:02:18.509546] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:17.764 [2024-11-26 20:02:18.513454] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:24:17.764 [2024-11-26 20:02:18.513504] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa43690 0 00:24:17.764 [2024-11-26 20:02:18.513707] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:17.764 [2024-11-26 20:02:18.513715] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:17.764 [2024-11-26 20:02:18.513719] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:17.764 [2024-11-26 20:02:18.513723] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:17.764 [2024-11-26 20:02:18.513754] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.513759] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.513764] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa43690) 00:24:17.764 [2024-11-26 20:02:18.513777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:17.764 [2024-11-26 20:02:18.513793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5100, cid 0, qid 0 00:24:17.764 [2024-11-26 20:02:18.521174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.764 [2024-11-26 20:02:18.521185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.764 [2024-11-26 20:02:18.521189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.521194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5100) on tqpair=0xa43690 00:24:17.764 [2024-11-26 20:02:18.521206] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:17.764 [2024-11-26 20:02:18.521214] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:17.764 [2024-11-26 20:02:18.521219] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:17.764 [2024-11-26 20:02:18.521235] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.521239] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.521242] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa43690) 00:24:17.764 [2024-11-26 20:02:18.521251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.764 [2024-11-26 20:02:18.521265] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5100, cid 0, qid 0 00:24:17.764 [2024-11-26 20:02:18.521383] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.764 [2024-11-26 20:02:18.521390] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.764 [2024-11-26 20:02:18.521394] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.521399] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5100) on tqpair=0xa43690 00:24:17.764 [2024-11-26 20:02:18.521407] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:17.764 [2024-11-26 20:02:18.521414] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:17.764 [2024-11-26 20:02:18.521422] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.521426] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.521429] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa43690) 00:24:17.764 [2024-11-26 20:02:18.521436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.764 [2024-11-26 20:02:18.521447] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5100, cid 0, qid 0 00:24:17.764 [2024-11-26 20:02:18.521603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.764 [2024-11-26 20:02:18.521614] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.764 [2024-11-26 20:02:18.521617] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.521621] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5100) on tqpair=0xa43690 00:24:17.764 [2024-11-26 20:02:18.521627] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:24:17.764 [2024-11-26 20:02:18.521635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:17.764 [2024-11-26 20:02:18.521641] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.521645] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.521649] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa43690) 00:24:17.764 [2024-11-26 20:02:18.521656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.764 [2024-11-26 20:02:18.521667] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5100, cid 0, qid 0 00:24:17.764 [2024-11-26 20:02:18.521914] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.764 [2024-11-26 
20:02:18.521921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.764 [2024-11-26 20:02:18.521925] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.521929] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5100) on tqpair=0xa43690 00:24:17.764 [2024-11-26 20:02:18.521934] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:17.764 [2024-11-26 20:02:18.521944] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.521948] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.521951] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa43690) 00:24:17.764 [2024-11-26 20:02:18.521958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.764 [2024-11-26 20:02:18.521968] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5100, cid 0, qid 0 00:24:17.764 [2024-11-26 20:02:18.522145] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.764 [2024-11-26 20:02:18.522151] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.764 [2024-11-26 20:02:18.522155] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.522165] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5100) on tqpair=0xa43690 00:24:17.764 [2024-11-26 20:02:18.522170] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:17.764 [2024-11-26 20:02:18.522175] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:17.764 [2024-11-26 20:02:18.522183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:17.764 [2024-11-26 20:02:18.522293] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:17.764 [2024-11-26 20:02:18.522297] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:17.764 [2024-11-26 20:02:18.522305] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.522309] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.522312] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa43690) 00:24:17.764 [2024-11-26 20:02:18.522319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.764 [2024-11-26 20:02:18.522333] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5100, cid 0, qid 0 00:24:17.764 [2024-11-26 20:02:18.522484] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.764 [2024-11-26 20:02:18.522490] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.764 [2024-11-26 20:02:18.522493] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.522497] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5100) on tqpair=0xa43690 00:24:17.764 [2024-11-26 20:02:18.522502] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:17.764 [2024-11-26 20:02:18.522512] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.522515] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.522519] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa43690) 00:24:17.764 [2024-11-26 20:02:18.522526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.764 [2024-11-26 20:02:18.522536] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5100, cid 0, qid 0 00:24:17.764 [2024-11-26 20:02:18.522727] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.764 [2024-11-26 20:02:18.522733] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.764 [2024-11-26 20:02:18.522736] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.522740] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5100) on tqpair=0xa43690 00:24:17.764 [2024-11-26 20:02:18.522745] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:17.764 [2024-11-26 20:02:18.522749] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:17.764 [2024-11-26 20:02:18.522757] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:17.764 [2024-11-26 20:02:18.522768] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:17.764 [2024-11-26 20:02:18.522777] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.522781] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa43690) 00:24:17.764 [2024-11-26 20:02:18.522788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.764 [2024-11-26 20:02:18.522799] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5100, cid 0, qid 0 00:24:17.764 [2024-11-26 20:02:18.523018] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.764 [2024-11-26 20:02:18.523024] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.764 [2024-11-26 20:02:18.523028] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.523032] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa43690): datao=0, datal=4096, cccid=0 00:24:17.764 [2024-11-26 20:02:18.523036] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaa5100) on tqpair(0xa43690): expected_datao=0, payload_size=4096 00:24:17.764 [2024-11-26 20:02:18.523041] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.523060] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: 
*DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.523065] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.523202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.764 [2024-11-26 20:02:18.523209] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.764 [2024-11-26 20:02:18.523213] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.764 [2024-11-26 20:02:18.523217] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5100) on tqpair=0xa43690 00:24:17.764 [2024-11-26 20:02:18.523227] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:17.764 [2024-11-26 20:02:18.523232] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:17.764 [2024-11-26 20:02:18.523237] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:17.764 [2024-11-26 20:02:18.523241] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:17.765 [2024-11-26 20:02:18.523246] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:17.765 [2024-11-26 20:02:18.523251] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:17.765 [2024-11-26 20:02:18.523259] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:17.765 [2024-11-26 20:02:18.523266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.765 [2024-11-26 20:02:18.523270] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.765 [2024-11-26 20:02:18.523274] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa43690) 00:24:17.765 [2024-11-26 20:02:18.523281] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:17.765 [2024-11-26 20:02:18.523292] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5100, cid 0, qid 0 00:24:17.765 [2024-11-26 20:02:18.523477] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.765 [2024-11-26 20:02:18.523483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.765 [2024-11-26 20:02:18.523486] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.765 [2024-11-26 20:02:18.523490] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5100) on tqpair=0xa43690 00:24:17.765 [2024-11-26 20:02:18.523497] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.765 [2024-11-26 20:02:18.523501] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.765 [2024-11-26 20:02:18.523505] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa43690) 00:24:17.765 [2024-11-26 20:02:18.523511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.765 [2024-11-26 20:02:18.523517] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.765 [2024-11-26 20:02:18.523521] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:24:17.765 [2024-11-26 20:02:18.523525] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xa43690) 00:24:17.765 [2024-11-26 20:02:18.523531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.765 [2024-11-26 20:02:18.523537] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.765 [2024-11-26 20:02:18.523541] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.765 [2024-11-26 20:02:18.523544] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa43690) 00:24:17.765 [2024-11-26 20:02:18.523550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.765 [2024-11-26 20:02:18.523556] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.765 [2024-11-26 20:02:18.523560] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.765 [2024-11-26 20:02:18.523564] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa43690) 00:24:17.765 [2024-11-26 20:02:18.523569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.765 [2024-11-26 20:02:18.523574] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:17.765 [2024-11-26 20:02:18.523589] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:17.765 [2024-11-26 20:02:18.523596] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.765 [2024-11-26 20:02:18.523600] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa43690) 00:24:17.765 [2024-11-26 20:02:18.523607] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.765 [2024-11-26 20:02:18.523619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5100, cid 0, qid 0 00:24:17.765 [2024-11-26 20:02:18.523624] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5280, cid 1, qid 0 00:24:17.765 [2024-11-26 20:02:18.523629] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5400, cid 2, qid 0 00:24:17.765 [2024-11-26 20:02:18.523633] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5580, cid 3, qid 0 00:24:17.765 [2024-11-26 20:02:18.523638] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5700, cid 4, qid 0 00:24:17.765 [2024-11-26 20:02:18.523783] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.765 [2024-11-26 20:02:18.523789] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.765 [2024-11-26 20:02:18.523793] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.765 [2024-11-26 20:02:18.523797] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5700) on tqpair=0xa43690 00:24:17.765 [2024-11-26 20:02:18.523802] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:17.765 [2024-11-26 20:02:18.523807] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:17.765 [2024-11-26 20:02:18.523817] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:17.765 [2024-11-26 20:02:18.523824] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:17.765 [2024-11-26 20:02:18.523831] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.765 [2024-11-26 20:02:18.523835] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.765 [2024-11-26 20:02:18.523838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa43690) 00:24:17.765 [2024-11-26 20:02:18.523845] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:17.765 [2024-11-26 20:02:18.523855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5700, cid 4, qid 0 00:24:17.765 [2024-11-26 20:02:18.527176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.765 [2024-11-26 20:02:18.527185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.765 [2024-11-26 20:02:18.527189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.765 [2024-11-26 20:02:18.527193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5700) on tqpair=0xa43690 00:24:17.765 [2024-11-26 20:02:18.527262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:17.765 [2024-11-26 20:02:18.527272] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:17.765 [2024-11-26 20:02:18.527280] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.765 [2024-11-26 20:02:18.527284] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa43690) 00:24:17.765 [2024-11-26 20:02:18.527291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.765 [2024-11-26 20:02:18.527305] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5700, cid 4, qid 0 00:24:17.765 [2024-11-26 20:02:18.527460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.765 [2024-11-26 20:02:18.527466] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.765 [2024-11-26 20:02:18.527470] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.765 [2024-11-26 20:02:18.527473] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa43690): datao=0, datal=4096, cccid=4 00:24:17.765 [2024-11-26 20:02:18.527478] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaa5700) on tqpair(0xa43690): expected_datao=0, payload_size=4096 00:24:17.765 [2024-11-26 20:02:18.527483] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.765 [2024-11-26 20:02:18.527497] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.765 [2024-11-26 20:02:18.527501] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:24:17.765 [2024-11-26 20:02:18.572172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.765 [2024-11-26 20:02:18.572184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.765 [2024-11-26 20:02:18.572187] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.765 [2024-11-26 20:02:18.572191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5700) on tqpair=0xa43690 00:24:17.765 [2024-11-26 20:02:18.572208] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:17.765 [2024-11-26 20:02:18.572218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:17.765 [2024-11-26 20:02:18.572227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:17.765 [2024-11-26 20:02:18.572235] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.765 [2024-11-26 20:02:18.572238] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa43690) 00:24:17.765 [2024-11-26 20:02:18.572246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.765 [2024-11-26 20:02:18.572259] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5700, cid 4, qid 0 00:24:17.765 [2024-11-26 20:02:18.572419] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.765 [2024-11-26 20:02:18.572425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.765 [2024-11-26 20:02:18.572429] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.765 [2024-11-26 20:02:18.572433] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa43690): datao=0, datal=4096, cccid=4 00:24:17.765 [2024-11-26 20:02:18.572437] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaa5700) on tqpair(0xa43690): expected_datao=0, payload_size=4096 00:24:17.765 [2024-11-26 20:02:18.572442] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.765 [2024-11-26 20:02:18.572455] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.765 [2024-11-26 20:02:18.572460] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:18.028 [2024-11-26 20:02:18.614239] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.028 [2024-11-26 20:02:18.614256] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.028 [2024-11-26 20:02:18.614260] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.028 [2024-11-26 20:02:18.614265] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5700) on tqpair=0xa43690 00:24:18.028 [2024-11-26 20:02:18.614280] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:18.028 [2024-11-26 20:02:18.614291] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:18.028 [2024-11-26 20:02:18.614300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:18.028 [2024-11-26 20:02:18.614311] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa43690) 00:24:18.028 [2024-11-26 20:02:18.614320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.028 [2024-11-26 20:02:18.614335] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5700, cid 4, qid 0 00:24:18.028 [2024-11-26 20:02:18.614588] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:18.028 [2024-11-26 20:02:18.614595] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:18.028 [2024-11-26 20:02:18.614599] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:18.028 [2024-11-26 20:02:18.614602] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa43690): datao=0, datal=4096, cccid=4 00:24:18.028 [2024-11-26 20:02:18.614607] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaa5700) on tqpair(0xa43690): expected_datao=0, payload_size=4096 00:24:18.028 [2024-11-26 20:02:18.614612] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:18.028 [2024-11-26 20:02:18.614626] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:18.028 [2024-11-26 20:02:18.614630] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:18.028 [2024-11-26 20:02:18.659172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.028 [2024-11-26 20:02:18.659185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.028 [2024-11-26 20:02:18.659189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.028 [2024-11-26 20:02:18.659193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5700) on tqpair=0xa43690 00:24:18.028 [2024-11-26 20:02:18.659210] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:18.028 [2024-11-26 20:02:18.659220] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:18.028 [2024-11-26 20:02:18.659228] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:18.028 [2024-11-26 20:02:18.659235] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:18.028 [2024-11-26 20:02:18.659240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:18.028 [2024-11-26 20:02:18.659245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:18.028 [2024-11-26 20:02:18.659251] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:18.028 [2024-11-26 20:02:18.659256] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:18.028 [2024-11-26 20:02:18.659262] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:18.028 [2024-11-26 20:02:18.659280] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:24:18.028 [2024-11-26 20:02:18.659284] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa43690) 00:24:18.028 [2024-11-26 20:02:18.659293] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.028 [2024-11-26 20:02:18.659300] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:18.028 [2024-11-26 20:02:18.659304] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:18.028 [2024-11-26 20:02:18.659308] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa43690) 00:24:18.028 [2024-11-26 20:02:18.659314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:18.028 [2024-11-26 20:02:18.659335] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5700, cid 4, qid 0 00:24:18.028 [2024-11-26 20:02:18.659341] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5880, cid 5, qid 0 00:24:18.028 [2024-11-26 20:02:18.659465] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.028 [2024-11-26 20:02:18.659471] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.028 [2024-11-26 20:02:18.659475] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.028 [2024-11-26 20:02:18.659479] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5700) on tqpair=0xa43690 00:24:18.028 [2024-11-26 20:02:18.659486] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.028 [2024-11-26 20:02:18.659492] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.028 [2024-11-26 20:02:18.659495] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.028 [2024-11-26 20:02:18.659499] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5880) on tqpair=0xa43690 00:24:18.028 [2024-11-26 20:02:18.659508] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:18.028 [2024-11-26 20:02:18.659512] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa43690) 00:24:18.028 [2024-11-26 20:02:18.659519] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.028 [2024-11-26 20:02:18.659530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5880, cid 5, qid 0 00:24:18.028 [2024-11-26 20:02:18.659655] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.028 [2024-11-26 20:02:18.659661] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.028 [2024-11-26 20:02:18.659665] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.028 [2024-11-26 20:02:18.659669] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5880) on tqpair=0xa43690 00:24:18.028 [2024-11-26 20:02:18.659678] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:18.028 [2024-11-26 20:02:18.659682] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa43690) 00:24:18.028 [2024-11-26 20:02:18.659689] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.028 [2024-11-26 20:02:18.659700] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5880, cid 5, qid 0 00:24:18.028 [2024-11-26 20:02:18.659938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.028 [2024-11-26 20:02:18.659944] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.028 [2024-11-26 20:02:18.659947] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.028 [2024-11-26 20:02:18.659951] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5880) on tqpair=0xa43690 00:24:18.028 [2024-11-26 20:02:18.659961] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:18.028 [2024-11-26 20:02:18.659965] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa43690) 00:24:18.028 [2024-11-26 20:02:18.659972] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.028 [2024-11-26 20:02:18.659982] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5880, cid 5, qid 0 00:24:18.028 [2024-11-26 20:02:18.660290] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.028 [2024-11-26 20:02:18.660298] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.028 [2024-11-26 20:02:18.660302] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.028 [2024-11-26 20:02:18.660306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5880) on tqpair=0xa43690 00:24:18.029 [2024-11-26 20:02:18.660323] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:18.029 [2024-11-26 20:02:18.660328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa43690) 00:24:18.029 [2024-11-26 20:02:18.660335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.029 [2024-11-26 20:02:18.660348] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:18.029 [2024-11-26 20:02:18.660352] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa43690) 00:24:18.029 [2024-11-26 20:02:18.660358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.029 [2024-11-26 20:02:18.660366] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:18.029 [2024-11-26 20:02:18.660370] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xa43690) 00:24:18.029 [2024-11-26 20:02:18.660376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.029 [2024-11-26 20:02:18.660384] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:18.029 [2024-11-26 20:02:18.660388] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa43690) 00:24:18.029 [2024-11-26 20:02:18.660394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.029 [2024-11-26 20:02:18.660406] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5880, cid 5, qid 0 00:24:18.029 
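(Aside: the four GET LOG PAGE commands above can be decoded from CDW10 using the NVMe spec -- bits 31:16 hold NUMDL, the 0-based dword count, and bits 7:0 hold the log page ID; this decode is reconstructed from the spec, not printed by the log itself:
  cdw10 0x07ff0001 -> LID 01h, Error Information:  (0x7ff + 1) * 4 = 8192 bytes
  cdw10 0x007f0002 -> LID 02h, SMART / Health:     (0x07f + 1) * 4 =  512 bytes
  cdw10 0x007f0003 -> LID 03h, Firmware Slot:      (0x07f + 1) * 4 =  512 bytes
  cdw10 0x03ff0005 -> LID 05h, Command Effects:    (0x3ff + 1) * 4 = 4096 bytes
These sizes line up with the c2h_data payload_size values of 8192, 512, 512 and 4096 reported for cccid 5, 4, 6 and 7 in the transfers below, and the 8192-byte error log matches the 128 entries of 64 bytes each advertised later as "Error Log Page Entries Supported: 128".)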
[2024-11-26 20:02:18.660411] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5700, cid 4, qid 0 00:24:18.029 [2024-11-26 20:02:18.660416] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5a00, cid 6, qid 0 00:24:18.029 [2024-11-26 20:02:18.660422] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5b80, cid 7, qid 0 00:24:18.029 [2024-11-26 20:02:18.660649] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:18.029 [2024-11-26 20:02:18.660656] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:18.029 [2024-11-26 20:02:18.660660] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:18.029 [2024-11-26 20:02:18.660664] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa43690): datao=0, datal=8192, cccid=5 00:24:18.029 [2024-11-26 20:02:18.660668] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaa5880) on tqpair(0xa43690): expected_datao=0, payload_size=8192 00:24:18.029 [2024-11-26 20:02:18.660673] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:18.029 [2024-11-26 20:02:18.660763] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:18.029 [2024-11-26 20:02:18.660767] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:18.029 [2024-11-26 20:02:18.660773] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:18.029 [2024-11-26 20:02:18.660779] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:18.029 [2024-11-26 20:02:18.660782] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:18.029 [2024-11-26 20:02:18.660786] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa43690): datao=0, datal=512, cccid=4 00:24:18.029 [2024-11-26 20:02:18.660791] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaa5700) on tqpair(0xa43690): expected_datao=0, payload_size=512 00:24:18.029 [2024-11-26 20:02:18.660795] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:18.029 [2024-11-26 20:02:18.660801] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:18.029 [2024-11-26 20:02:18.660805] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:18.029 [2024-11-26 20:02:18.660811] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:18.029 [2024-11-26 20:02:18.660816] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:18.029 [2024-11-26 20:02:18.660820] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:18.029 [2024-11-26 20:02:18.660823] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa43690): datao=0, datal=512, cccid=6 00:24:18.029 [2024-11-26 20:02:18.660828] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaa5a00) on tqpair(0xa43690): expected_datao=0, payload_size=512 00:24:18.029 [2024-11-26 20:02:18.660835] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:18.029 [2024-11-26 20:02:18.660841] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:18.029 [2024-11-26 20:02:18.660845] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:18.029 [2024-11-26 20:02:18.660850] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:18.029 [2024-11-26 20:02:18.660856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:18.029 [2024-11-26 20:02:18.660860] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:18.029 [2024-11-26 20:02:18.660863] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa43690): datao=0, datal=4096, cccid=7 00:24:18.029 [2024-11-26 20:02:18.660868] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaa5b80) on tqpair(0xa43690): expected_datao=0, payload_size=4096 00:24:18.029 [2024-11-26 20:02:18.660872] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:18.029 [2024-11-26 20:02:18.660888] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:18.029 [2024-11-26 20:02:18.660892] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:18.029 [2024-11-26 20:02:18.702382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.029 [2024-11-26 20:02:18.702394] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.029 [2024-11-26 20:02:18.702398] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.029 [2024-11-26 20:02:18.702404] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5880) on tqpair=0xa43690 00:24:18.029 [2024-11-26 20:02:18.702420] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.029 [2024-11-26 20:02:18.702426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.029 [2024-11-26 20:02:18.702430] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.029 [2024-11-26 20:02:18.702434] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5700) on tqpair=0xa43690 00:24:18.029 [2024-11-26 20:02:18.702447] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.029 [2024-11-26 20:02:18.702453] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.029 [2024-11-26 20:02:18.702456] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.029 [2024-11-26 20:02:18.702460] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5a00) on tqpair=0xa43690 00:24:18.029 [2024-11-26 20:02:18.702467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.029 [2024-11-26 20:02:18.702473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.029 [2024-11-26 20:02:18.702477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.029 [2024-11-26 20:02:18.702484] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5b80) on tqpair=0xa43690 00:24:18.029 ===================================================== 00:24:18.029 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:18.029 ===================================================== 00:24:18.029 Controller Capabilities/Features 00:24:18.029 ================================ 00:24:18.029 Vendor ID: 8086 00:24:18.029 Subsystem Vendor ID: 8086 00:24:18.029 Serial Number: SPDK00000000000001 00:24:18.029 Model Number: SPDK bdev Controller 00:24:18.029 Firmware Version: 25.01 00:24:18.029 Recommended Arb Burst: 6 00:24:18.029 IEEE OUI Identifier: e4 d2 5c 00:24:18.029 Multi-path I/O 00:24:18.029 May have multiple subsystem ports: Yes 00:24:18.029 May have multiple controllers: Yes 00:24:18.029 Associated with SR-IOV VF: No 00:24:18.029 Max Data Transfer Size: 131072 00:24:18.029 Max Number of Namespaces: 32 00:24:18.029 Max Number of I/O Queues: 127 00:24:18.029 NVMe Specification Version (VS): 1.3 00:24:18.029 NVMe Specification Version (Identify): 1.3 00:24:18.029 
Maximum Queue Entries: 128 00:24:18.029 Contiguous Queues Required: Yes 00:24:18.029 Arbitration Mechanisms Supported 00:24:18.029 Weighted Round Robin: Not Supported 00:24:18.029 Vendor Specific: Not Supported 00:24:18.029 Reset Timeout: 15000 ms 00:24:18.029 Doorbell Stride: 4 bytes 00:24:18.029 NVM Subsystem Reset: Not Supported 00:24:18.029 Command Sets Supported 00:24:18.029 NVM Command Set: Supported 00:24:18.029 Boot Partition: Not Supported 00:24:18.029 Memory Page Size Minimum: 4096 bytes 00:24:18.029 Memory Page Size Maximum: 4096 bytes 00:24:18.029 Persistent Memory Region: Not Supported 00:24:18.029 Optional Asynchronous Events Supported 00:24:18.029 Namespace Attribute Notices: Supported 00:24:18.029 Firmware Activation Notices: Not Supported 00:24:18.029 ANA Change Notices: Not Supported 00:24:18.029 PLE Aggregate Log Change Notices: Not Supported 00:24:18.029 LBA Status Info Alert Notices: Not Supported 00:24:18.029 EGE Aggregate Log Change Notices: Not Supported 00:24:18.029 Normal NVM Subsystem Shutdown event: Not Supported 00:24:18.029 Zone Descriptor Change Notices: Not Supported 00:24:18.029 Discovery Log Change Notices: Not Supported 00:24:18.029 Controller Attributes 00:24:18.029 128-bit Host Identifier: Supported 00:24:18.029 Non-Operational Permissive Mode: Not Supported 00:24:18.029 NVM Sets: Not Supported 00:24:18.029 Read Recovery Levels: Not Supported 00:24:18.029 Endurance Groups: Not Supported 00:24:18.029 Predictable Latency Mode: Not Supported 00:24:18.029 Traffic Based Keep ALive: Not Supported 00:24:18.029 Namespace Granularity: Not Supported 00:24:18.029 SQ Associations: Not Supported 00:24:18.029 UUID List: Not Supported 00:24:18.029 Multi-Domain Subsystem: Not Supported 00:24:18.029 Fixed Capacity Management: Not Supported 00:24:18.029 Variable Capacity Management: Not Supported 00:24:18.029 Delete Endurance Group: Not Supported 00:24:18.029 Delete NVM Set: Not Supported 00:24:18.029 Extended LBA Formats Supported: Not Supported 00:24:18.029 Flexible Data Placement Supported: Not Supported 00:24:18.029 00:24:18.029 Controller Memory Buffer Support 00:24:18.029 ================================ 00:24:18.029 Supported: No 00:24:18.029 00:24:18.029 Persistent Memory Region Support 00:24:18.029 ================================ 00:24:18.029 Supported: No 00:24:18.029 00:24:18.029 Admin Command Set Attributes 00:24:18.029 ============================ 00:24:18.029 Security Send/Receive: Not Supported 00:24:18.030 Format NVM: Not Supported 00:24:18.030 Firmware Activate/Download: Not Supported 00:24:18.030 Namespace Management: Not Supported 00:24:18.030 Device Self-Test: Not Supported 00:24:18.030 Directives: Not Supported 00:24:18.030 NVMe-MI: Not Supported 00:24:18.030 Virtualization Management: Not Supported 00:24:18.030 Doorbell Buffer Config: Not Supported 00:24:18.030 Get LBA Status Capability: Not Supported 00:24:18.030 Command & Feature Lockdown Capability: Not Supported 00:24:18.030 Abort Command Limit: 4 00:24:18.030 Async Event Request Limit: 4 00:24:18.030 Number of Firmware Slots: N/A 00:24:18.030 Firmware Slot 1 Read-Only: N/A 00:24:18.030 Firmware Activation Without Reset: N/A 00:24:18.030 Multiple Update Detection Support: N/A 00:24:18.030 Firmware Update Granularity: No Information Provided 00:24:18.030 Per-Namespace SMART Log: No 00:24:18.030 Asymmetric Namespace Access Log Page: Not Supported 00:24:18.030 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:18.030 Command Effects Log Page: Supported 00:24:18.030 Get Log Page Extended Data: 
Supported 00:24:18.030 Telemetry Log Pages: Not Supported 00:24:18.030 Persistent Event Log Pages: Not Supported 00:24:18.030 Supported Log Pages Log Page: May Support 00:24:18.030 Commands Supported & Effects Log Page: Not Supported 00:24:18.030 Feature Identifiers & Effects Log Page:May Support 00:24:18.030 NVMe-MI Commands & Effects Log Page: May Support 00:24:18.030 Data Area 4 for Telemetry Log: Not Supported 00:24:18.030 Error Log Page Entries Supported: 128 00:24:18.030 Keep Alive: Supported 00:24:18.030 Keep Alive Granularity: 10000 ms 00:24:18.030 00:24:18.030 NVM Command Set Attributes 00:24:18.030 ========================== 00:24:18.030 Submission Queue Entry Size 00:24:18.030 Max: 64 00:24:18.030 Min: 64 00:24:18.030 Completion Queue Entry Size 00:24:18.030 Max: 16 00:24:18.030 Min: 16 00:24:18.030 Number of Namespaces: 32 00:24:18.030 Compare Command: Supported 00:24:18.030 Write Uncorrectable Command: Not Supported 00:24:18.030 Dataset Management Command: Supported 00:24:18.030 Write Zeroes Command: Supported 00:24:18.030 Set Features Save Field: Not Supported 00:24:18.030 Reservations: Supported 00:24:18.030 Timestamp: Not Supported 00:24:18.030 Copy: Supported 00:24:18.030 Volatile Write Cache: Present 00:24:18.030 Atomic Write Unit (Normal): 1 00:24:18.030 Atomic Write Unit (PFail): 1 00:24:18.030 Atomic Compare & Write Unit: 1 00:24:18.030 Fused Compare & Write: Supported 00:24:18.030 Scatter-Gather List 00:24:18.030 SGL Command Set: Supported 00:24:18.030 SGL Keyed: Supported 00:24:18.030 SGL Bit Bucket Descriptor: Not Supported 00:24:18.030 SGL Metadata Pointer: Not Supported 00:24:18.030 Oversized SGL: Not Supported 00:24:18.030 SGL Metadata Address: Not Supported 00:24:18.030 SGL Offset: Supported 00:24:18.030 Transport SGL Data Block: Not Supported 00:24:18.030 Replay Protected Memory Block: Not Supported 00:24:18.030 00:24:18.030 Firmware Slot Information 00:24:18.030 ========================= 00:24:18.030 Active slot: 1 00:24:18.030 Slot 1 Firmware Revision: 25.01 00:24:18.030 00:24:18.030 00:24:18.030 Commands Supported and Effects 00:24:18.030 ============================== 00:24:18.030 Admin Commands 00:24:18.030 -------------- 00:24:18.030 Get Log Page (02h): Supported 00:24:18.030 Identify (06h): Supported 00:24:18.030 Abort (08h): Supported 00:24:18.030 Set Features (09h): Supported 00:24:18.030 Get Features (0Ah): Supported 00:24:18.030 Asynchronous Event Request (0Ch): Supported 00:24:18.030 Keep Alive (18h): Supported 00:24:18.030 I/O Commands 00:24:18.030 ------------ 00:24:18.030 Flush (00h): Supported LBA-Change 00:24:18.030 Write (01h): Supported LBA-Change 00:24:18.030 Read (02h): Supported 00:24:18.030 Compare (05h): Supported 00:24:18.030 Write Zeroes (08h): Supported LBA-Change 00:24:18.030 Dataset Management (09h): Supported LBA-Change 00:24:18.030 Copy (19h): Supported LBA-Change 00:24:18.030 00:24:18.030 Error Log 00:24:18.030 ========= 00:24:18.030 00:24:18.030 Arbitration 00:24:18.030 =========== 00:24:18.030 Arbitration Burst: 1 00:24:18.030 00:24:18.030 Power Management 00:24:18.030 ================ 00:24:18.030 Number of Power States: 1 00:24:18.030 Current Power State: Power State #0 00:24:18.030 Power State #0: 00:24:18.030 Max Power: 0.00 W 00:24:18.030 Non-Operational State: Operational 00:24:18.030 Entry Latency: Not Reported 00:24:18.030 Exit Latency: Not Reported 00:24:18.030 Relative Read Throughput: 0 00:24:18.030 Relative Read Latency: 0 00:24:18.030 Relative Write Throughput: 0 00:24:18.030 Relative Write Latency: 0 00:24:18.030 
Idle Power: Not Reported 00:24:18.030 Active Power: Not Reported 00:24:18.030 Non-Operational Permissive Mode: Not Supported 00:24:18.030 00:24:18.030 Health Information 00:24:18.030 ================== 00:24:18.030 Critical Warnings: 00:24:18.030 Available Spare Space: OK 00:24:18.030 Temperature: OK 00:24:18.030 Device Reliability: OK 00:24:18.030 Read Only: No 00:24:18.030 Volatile Memory Backup: OK 00:24:18.030 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:18.030 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:18.030 Available Spare: 0% 00:24:18.030 Available Spare Threshold: 0% 00:24:18.030 Life Percentage Used:[2024-11-26 20:02:18.702587] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:18.030 [2024-11-26 20:02:18.702594] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa43690) 00:24:18.030 [2024-11-26 20:02:18.702602] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.030 [2024-11-26 20:02:18.702616] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5b80, cid 7, qid 0 00:24:18.030 [2024-11-26 20:02:18.702751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.030 [2024-11-26 20:02:18.702759] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.030 [2024-11-26 20:02:18.702762] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.030 [2024-11-26 20:02:18.702766] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5b80) on tqpair=0xa43690 00:24:18.030 [2024-11-26 20:02:18.702802] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:18.030 [2024-11-26 20:02:18.702812] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5100) on tqpair=0xa43690 00:24:18.030 [2024-11-26 20:02:18.702820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.030 [2024-11-26 20:02:18.702832] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5280) on tqpair=0xa43690 00:24:18.030 [2024-11-26 20:02:18.702838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.030 [2024-11-26 20:02:18.702843] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5400) on tqpair=0xa43690 00:24:18.030 [2024-11-26 20:02:18.702848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.030 [2024-11-26 20:02:18.702853] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5580) on tqpair=0xa43690 00:24:18.030 [2024-11-26 20:02:18.702857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.030 [2024-11-26 20:02:18.702868] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:18.030 [2024-11-26 20:02:18.702874] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:18.030 [2024-11-26 20:02:18.702879] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa43690) 00:24:18.030 [2024-11-26 20:02:18.702887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
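(Aside: the FABRIC PROPERTY GET/SET records from here through "shutdown complete" are the host's NVMe shutdown handshake. A minimal sketch of the sequence, with register offsets taken from the NVMe spec rather than from this log:
  # PROPERTY GET CC   (offset 0x14)  - read the current controller configuration
  # PROPERTY SET CC                  - write it back with CC.SHN = 01b, requesting normal shutdown
  # PROPERTY GET CSTS (offset 0x1c)  - poll until CSTS.SHST = 10b, shutdown complete
Because the controller reports RTD3E = 0 us, the host falls back to its default shutdown timeout of 10000 ms; the poll loop below actually finishes in 7 milliseconds.)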
00:24:18.030 [2024-11-26 20:02:18.702900] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5580, cid 3, qid 0 00:24:18.030 [2024-11-26 20:02:18.703131] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.030 [2024-11-26 20:02:18.703140] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.030 [2024-11-26 20:02:18.703143] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.030 [2024-11-26 20:02:18.703147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5580) on tqpair=0xa43690 00:24:18.030 [2024-11-26 20:02:18.703154] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:18.030 [2024-11-26 20:02:18.707166] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:18.030 [2024-11-26 20:02:18.707171] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa43690) 00:24:18.030 [2024-11-26 20:02:18.707178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.030 [2024-11-26 20:02:18.707194] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5580, cid 3, qid 0 00:24:18.030 [2024-11-26 20:02:18.707365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.030 [2024-11-26 20:02:18.707372] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.030 [2024-11-26 20:02:18.707376] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.707380] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5580) on tqpair=0xa43690 00:24:18.031 [2024-11-26 20:02:18.707385] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:18.031 [2024-11-26 20:02:18.707390] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:18.031 [2024-11-26 20:02:18.707400] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.707403] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.707407] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa43690) 00:24:18.031 [2024-11-26 20:02:18.707414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.031 [2024-11-26 20:02:18.707424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5580, cid 3, qid 0 00:24:18.031 [2024-11-26 20:02:18.707587] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.031 [2024-11-26 20:02:18.707594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.031 [2024-11-26 20:02:18.707597] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.707601] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5580) on tqpair=0xa43690 00:24:18.031 [2024-11-26 20:02:18.707614] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.707618] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.707621] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa43690) 00:24:18.031 [2024-11-26 20:02:18.707628] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.031 [2024-11-26 20:02:18.707638] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5580, cid 3, qid 0 00:24:18.031 [2024-11-26 20:02:18.707836] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.031 [2024-11-26 20:02:18.707843] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.031 [2024-11-26 20:02:18.707847] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.707851] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5580) on tqpair=0xa43690 00:24:18.031 [2024-11-26 20:02:18.707860] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.707864] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.707868] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa43690) 00:24:18.031 [2024-11-26 20:02:18.707875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.031 [2024-11-26 20:02:18.707885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5580, cid 3, qid 0 00:24:18.031 [2024-11-26 20:02:18.708067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.031 [2024-11-26 20:02:18.708074] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.031 [2024-11-26 20:02:18.708077] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.708081] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5580) on tqpair=0xa43690 00:24:18.031 [2024-11-26 20:02:18.708091] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.708095] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.708099] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa43690) 00:24:18.031 [2024-11-26 20:02:18.708106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.031 [2024-11-26 20:02:18.708117] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5580, cid 3, qid 0 00:24:18.031 [2024-11-26 20:02:18.708386] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.031 [2024-11-26 20:02:18.708393] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.031 [2024-11-26 20:02:18.708397] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.708401] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5580) on tqpair=0xa43690 00:24:18.031 [2024-11-26 20:02:18.708411] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.708415] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.708418] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa43690) 00:24:18.031 [2024-11-26 20:02:18.708425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.031 [2024-11-26 20:02:18.708436] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0xaa5580, cid 3, qid 0 00:24:18.031 [2024-11-26 20:02:18.708604] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.031 [2024-11-26 20:02:18.708610] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.031 [2024-11-26 20:02:18.708614] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.708618] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5580) on tqpair=0xa43690 00:24:18.031 [2024-11-26 20:02:18.708627] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.708634] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.708637] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa43690) 00:24:18.031 [2024-11-26 20:02:18.708644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.031 [2024-11-26 20:02:18.708654] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5580, cid 3, qid 0 00:24:18.031 [2024-11-26 20:02:18.708843] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.031 [2024-11-26 20:02:18.708850] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.031 [2024-11-26 20:02:18.708853] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.708857] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5580) on tqpair=0xa43690 00:24:18.031 [2024-11-26 20:02:18.708867] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.708871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.708874] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa43690) 00:24:18.031 [2024-11-26 20:02:18.708881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.031 [2024-11-26 20:02:18.708892] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5580, cid 3, qid 0 00:24:18.031 [2024-11-26 20:02:18.709137] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.031 [2024-11-26 20:02:18.709143] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.031 [2024-11-26 20:02:18.709147] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.709150] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5580) on tqpair=0xa43690 00:24:18.031 [2024-11-26 20:02:18.709167] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.709171] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.709175] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa43690) 00:24:18.031 [2024-11-26 20:02:18.709182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.031 [2024-11-26 20:02:18.709192] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5580, cid 3, qid 0 00:24:18.031 [2024-11-26 20:02:18.709391] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.031 [2024-11-26 20:02:18.709397] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:24:18.031 [2024-11-26 20:02:18.709400] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.709405] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5580) on tqpair=0xa43690 00:24:18.031 [2024-11-26 20:02:18.709415] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.709419] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.709423] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa43690) 00:24:18.031 [2024-11-26 20:02:18.709429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.031 [2024-11-26 20:02:18.709440] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5580, cid 3, qid 0 00:24:18.031 [2024-11-26 20:02:18.709628] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.031 [2024-11-26 20:02:18.709635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.031 [2024-11-26 20:02:18.709639] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.709643] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5580) on tqpair=0xa43690 00:24:18.031 [2024-11-26 20:02:18.709653] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.709657] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.709663] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa43690) 00:24:18.031 [2024-11-26 20:02:18.709670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.031 [2024-11-26 20:02:18.709680] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5580, cid 3, qid 0 00:24:18.031 [2024-11-26 20:02:18.709921] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.031 [2024-11-26 20:02:18.709928] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.031 [2024-11-26 20:02:18.709931] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.709935] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5580) on tqpair=0xa43690 00:24:18.031 [2024-11-26 20:02:18.709945] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.709949] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.709952] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa43690) 00:24:18.031 [2024-11-26 20:02:18.709959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.031 [2024-11-26 20:02:18.709969] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5580, cid 3, qid 0 00:24:18.031 [2024-11-26 20:02:18.710150] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.031 [2024-11-26 20:02:18.710162] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.031 [2024-11-26 20:02:18.710169] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.710173] nvme_tcp.c:1011:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0xaa5580) on tqpair=0xa43690 00:24:18.031 [2024-11-26 20:02:18.710182] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.710187] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:18.031 [2024-11-26 20:02:18.710191] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa43690) 00:24:18.031 [2024-11-26 20:02:18.710197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.032 [2024-11-26 20:02:18.710208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5580, cid 3, qid 0 00:24:18.032 [2024-11-26 20:02:18.710469] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.032 [2024-11-26 20:02:18.710476] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.032 [2024-11-26 20:02:18.710479] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.032 [2024-11-26 20:02:18.710483] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5580) on tqpair=0xa43690 00:24:18.032 [2024-11-26 20:02:18.710493] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:18.032 [2024-11-26 20:02:18.710497] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:18.032 [2024-11-26 20:02:18.710500] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa43690) 00:24:18.032 [2024-11-26 20:02:18.710507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.032 [2024-11-26 20:02:18.710518] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5580, cid 3, qid 0 00:24:18.032 [2024-11-26 20:02:18.710689] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.032 [2024-11-26 20:02:18.710696] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.032 [2024-11-26 20:02:18.710699] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.032 [2024-11-26 20:02:18.710703] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5580) on tqpair=0xa43690 00:24:18.032 [2024-11-26 20:02:18.710713] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:18.032 [2024-11-26 20:02:18.710717] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:18.032 [2024-11-26 20:02:18.710721] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa43690) 00:24:18.032 [2024-11-26 20:02:18.710730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.032 [2024-11-26 20:02:18.710741] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5580, cid 3, qid 0 00:24:18.032 [2024-11-26 20:02:18.710922] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.032 [2024-11-26 20:02:18.710928] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.032 [2024-11-26 20:02:18.710932] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.032 [2024-11-26 20:02:18.710936] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5580) on tqpair=0xa43690 00:24:18.032 [2024-11-26 20:02:18.710946] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:18.032 [2024-11-26 20:02:18.710950] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:18.032 [2024-11-26 20:02:18.710953] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa43690) 00:24:18.032 [2024-11-26 20:02:18.710960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.032 [2024-11-26 20:02:18.710971] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5580, cid 3, qid 0 00:24:18.032 [2024-11-26 20:02:18.711143] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.032 [2024-11-26 20:02:18.711150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.032 [2024-11-26 20:02:18.711153] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.032 [2024-11-26 20:02:18.715163] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5580) on tqpair=0xa43690 00:24:18.032 [2024-11-26 20:02:18.715176] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:18.032 [2024-11-26 20:02:18.715180] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:18.032 [2024-11-26 20:02:18.715184] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa43690) 00:24:18.032 [2024-11-26 20:02:18.715191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.032 [2024-11-26 20:02:18.715203] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaa5580, cid 3, qid 0 00:24:18.032 [2024-11-26 20:02:18.715372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:18.032 [2024-11-26 20:02:18.715379] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:18.032 [2024-11-26 20:02:18.715382] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:18.032 [2024-11-26 20:02:18.715386] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaa5580) on tqpair=0xa43690 00:24:18.032 [2024-11-26 20:02:18.715394] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:24:18.032 0% 00:24:18.032 Data Units Read: 0 00:24:18.032 Data Units Written: 0 00:24:18.032 Host Read Commands: 0 00:24:18.032 Host Write Commands: 0 00:24:18.032 Controller Busy Time: 0 minutes 00:24:18.032 Power Cycles: 0 00:24:18.032 Power On Hours: 0 hours 00:24:18.032 Unsafe Shutdowns: 0 00:24:18.032 Unrecoverable Media Errors: 0 00:24:18.032 Lifetime Error Log Entries: 0 00:24:18.032 Warning Temperature Time: 0 minutes 00:24:18.032 Critical Temperature Time: 0 minutes 00:24:18.032 00:24:18.032 Number of Queues 00:24:18.032 ================ 00:24:18.032 Number of I/O Submission Queues: 127 00:24:18.032 Number of I/O Completion Queues: 127 00:24:18.032 00:24:18.032 Active Namespaces 00:24:18.032 ================= 00:24:18.032 Namespace ID:1 00:24:18.032 Error Recovery Timeout: Unlimited 00:24:18.032 Command Set Identifier: NVM (00h) 00:24:18.032 Deallocate: Supported 00:24:18.032 Deallocated/Unwritten Error: Not Supported 00:24:18.032 Deallocated Read Value: Unknown 00:24:18.032 Deallocate in Write Zeroes: Not Supported 00:24:18.032 Deallocated Guard Field: 0xFFFF 00:24:18.032 Flush: Supported 00:24:18.032 Reservation: Supported 00:24:18.032 Namespace Sharing Capabilities: Multiple Controllers 00:24:18.032 Size (in LBAs): 131072 (0GiB) 00:24:18.032 Capacity (in LBAs): 131072 (0GiB) 
00:24:18.032 Utilization (in LBAs): 131072 (0GiB) 00:24:18.032 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:18.032 EUI64: ABCDEF0123456789 00:24:18.032 UUID: 60302fff-e41f-4c6d-aac2-12188daaac82 00:24:18.032 Thin Provisioning: Not Supported 00:24:18.032 Per-NS Atomic Units: Yes 00:24:18.032 Atomic Boundary Size (Normal): 0 00:24:18.032 Atomic Boundary Size (PFail): 0 00:24:18.032 Atomic Boundary Offset: 0 00:24:18.032 Maximum Single Source Range Length: 65535 00:24:18.032 Maximum Copy Length: 65535 00:24:18.032 Maximum Source Range Count: 1 00:24:18.032 NGUID/EUI64 Never Reused: No 00:24:18.032 Namespace Write Protected: No 00:24:18.032 Number of LBA Formats: 1 00:24:18.032 Current LBA Format: LBA Format #00 00:24:18.032 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:18.032 00:24:18.032 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:18.032 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:18.032 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.032 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:18.032 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.032 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:18.032 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:18.032 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:18.032 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:18.032 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:18.032 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:18.032 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:18.032 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:18.032 rmmod nvme_tcp 00:24:18.032 rmmod nvme_fabrics 00:24:18.032 rmmod nvme_keyring 00:24:18.032 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:18.032 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:18.032 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:18.032 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3752397 ']' 00:24:18.032 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3752397 00:24:18.032 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3752397 ']' 00:24:18.032 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3752397 00:24:18.032 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:24:18.032 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:18.032 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3752397 00:24:18.293 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:18.293 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:18.293 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3752397' 00:24:18.293 killing process with pid 3752397 00:24:18.293 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3752397 00:24:18.293 20:02:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3752397 00:24:18.293 20:02:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:18.293 20:02:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:18.293 20:02:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:18.293 20:02:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:18.293 20:02:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:24:18.293 20:02:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:18.293 20:02:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:24:18.293 20:02:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:18.293 20:02:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:18.293 20:02:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.293 20:02:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:18.293 20:02:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:20.833 00:24:20.833 real 0m11.787s 00:24:20.833 user 0m9.016s 00:24:20.833 sys 0m6.201s 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.833 ************************************ 00:24:20.833 END TEST nvmf_identify 00:24:20.833 ************************************ 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.833 ************************************ 00:24:20.833 START TEST nvmf_perf 00:24:20.833 ************************************ 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:20.833 * Looking for test storage... 
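(Aside: everything from the "=====" banner through the teardown above is test/nvmf/host/identify.sh at work. A minimal standalone sketch of the same two steps -- the example binary path and transport-ID option are assumptions that vary by SPDK version; only the address, port and NQN come from this log:
  # Dump controller/namespace identify data over NVMe/TCP (hypothetical invocation):
  ./build/examples/identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
  # Remove the subsystem afterwards via the same RPC the script calls at identify.sh@52:
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1)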
00:24:20.833 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:20.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.833 --rc genhtml_branch_coverage=1 00:24:20.833 --rc genhtml_function_coverage=1 00:24:20.833 --rc genhtml_legend=1 00:24:20.833 --rc geninfo_all_blocks=1 00:24:20.833 --rc geninfo_unexecuted_blocks=1 00:24:20.833 00:24:20.833 ' 00:24:20.833 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:20.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.833 --rc genhtml_branch_coverage=1 00:24:20.834 --rc genhtml_function_coverage=1 00:24:20.834 --rc genhtml_legend=1 00:24:20.834 --rc geninfo_all_blocks=1 00:24:20.834 --rc geninfo_unexecuted_blocks=1 00:24:20.834 00:24:20.834 ' 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:20.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.834 --rc genhtml_branch_coverage=1 00:24:20.834 --rc genhtml_function_coverage=1 00:24:20.834 --rc genhtml_legend=1 00:24:20.834 --rc geninfo_all_blocks=1 00:24:20.834 --rc geninfo_unexecuted_blocks=1 00:24:20.834 00:24:20.834 ' 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:20.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.834 --rc genhtml_branch_coverage=1 00:24:20.834 --rc genhtml_function_coverage=1 00:24:20.834 --rc genhtml_legend=1 00:24:20.834 --rc geninfo_all_blocks=1 00:24:20.834 --rc geninfo_unexecuted_blocks=1 00:24:20.834 00:24:20.834 ' 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:20.834 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.834 20:02:21 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:20.834 20:02:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:28.983 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:28.983 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:28.983 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:28.984 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:28.984 20:02:28 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]]
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:24:28.984 Found net devices under 0000:4b:00.1: cvl_0_1
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
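[Annotation] The nvmf_tcp_init trace above builds a point-to-point test topology from the two E810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Condensed, the traced commands are:

    ip netns add cvl_0_0_ns_spdk                      # namespace that will host nvmf_tgt
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up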
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:28.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:28.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms
00:24:28.984
00:24:28.984 --- 10.0.0.2 ping statistics ---
00:24:28.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:28.984 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:28.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:28.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms
00:24:28.984
00:24:28.984 --- 10.0.0.1 ping statistics ---
00:24:28.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:28.984 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:28.984 20:02:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:28.984 20:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:24:28.984 20:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:28.984 20:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:28.984 20:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:28.984 20:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3756806
00:24:28.984 20:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3756806
00:24:28.984 20:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:24:28.984 20:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3756806 ']'
00:24:28.984 20:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:28.984 20:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:28.984 20:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:28.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:28.984 20:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:28.984 20:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:28.984 [2024-11-26 20:02:29.102521] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization...
00:24:28.984 [2024-11-26 20:02:29.102588] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:28.984 [2024-11-26 20:02:29.205938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:28.984 [2024-11-26 20:02:29.260012] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:28.984 [2024-11-26 20:02:29.260066] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:28.984 [2024-11-26 20:02:29.260074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:28.984 [2024-11-26 20:02:29.260081] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:28.984 [2024-11-26 20:02:29.260087] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:28.984 [2024-11-26 20:02:29.262154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:28.984 [2024-11-26 20:02:29.262314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:24:28.984 [2024-11-26 20:02:29.262584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:24:28.984 [2024-11-26 20:02:29.262585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:29.246 20:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:29.246 20:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0
00:24:29.246 20:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:24:29.246 20:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:29.246 20:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:29.246 20:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:29.246 20:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:24:29.246 20:02:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:24:29.820 20:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:24:29.820 20:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:24:30.081 20:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0
00:24:30.081 20:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:24:30.343 20:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0'
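[Annotation] At this point the target is up: nvmf_tgt runs inside the namespace, waitforlisten has polled /var/tmp/spdk.sock into readiness, and perf.sh has created its Malloc bdev and recovered the local NVMe address over RPC. A condensed sketch of that bring-up, substituting a simple poll loop for the traced waitforlisten helper (an assumption; rpc_get_methods just succeeds once the socket answers):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
    "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512         # returns "Malloc0": 64 MiB of 512 B blocks
    "$SPDK/scripts/rpc.py" framework_get_config bdev \
        | jq -r '.[].params | select(.name=="Nvme0").traddr' # -> 0000:65:00.0 on this host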
00:24:30.343 20:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']'
00:24:30.343 20:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:24:30.343 20:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:24:30.343 20:02:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:24:30.343 [2024-11-26 20:02:31.098076] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:30.343 20:02:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:30.605 20:02:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:24:30.605 20:02:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:30.867 20:02:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:24:30.867 20:02:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:24:31.129 20:02:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:31.129 [2024-11-26 20:02:31.840877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:31.129 20:02:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:24:31.391 20:02:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']'
00:24:31.391 20:02:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
00:24:31.391 20:02:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:24:31.391 20:02:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
00:24:32.776 Initializing NVMe Controllers
00:24:32.776 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a]
00:24:32.776 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0
00:24:32.776 Initialization complete. Launching workers.
00:24:32.776 ========================================================
00:24:32.776 Latency(us)
00:24:32.776 Device Information : IOPS MiB/s Average min max
00:24:32.776 PCIE (0000:65:00.0) NSID 1 from core 0: 78358.17 306.09 407.89 13.28 4745.34
00:24:32.776 ========================================================
00:24:32.776 Total : 78358.17 306.09 407.89 13.28 4745.34
00:24:32.776
00:24:32.776 20:02:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:34.158 Initializing NVMe Controllers
00:24:34.158 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:34.158 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:34.158 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:34.158 Initialization complete. Launching workers.
00:24:34.158 ========================================================
00:24:34.158 Latency(us)
00:24:34.158 Device Information : IOPS MiB/s Average min max
00:24:34.158 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 67.00 0.26 15232.38 121.76 45935.15
00:24:34.158 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 53.00 0.21 19256.05 5988.69 47905.08
00:24:34.158 ========================================================
00:24:34.158 Total : 120.00 0.47 17009.50 121.76 47905.08
00:24:34.158
00:24:34.158 20:02:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:35.543 Initializing NVMe Controllers
00:24:35.543 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:35.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:35.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:35.543 Initialization complete. Launching workers.
00:24:35.543 ========================================================
00:24:35.543 Latency(us)
00:24:35.543 Device Information : IOPS MiB/s Average min max
00:24:35.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12738.86 49.76 2512.16 368.73 6183.60
00:24:35.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3855.54 15.06 8300.86 7253.26 15924.03
00:24:35.543 ========================================================
00:24:35.543 Total : 16594.40 64.82 3857.11 368.73 15924.03
00:24:35.543
00:24:35.543 20:02:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:24:35.543 20:02:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:24:35.543 20:02:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:38.085 Initializing NVMe Controllers
00:24:38.085 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:38.085 Controller IO queue size 128, less than required.
00:24:38.085 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:38.085 Controller IO queue size 128, less than required.
00:24:38.085 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:38.085 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:38.085 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:38.085 Initialization complete. Launching workers.
00:24:38.085 ========================================================
00:24:38.085 Latency(us)
00:24:38.085 Device Information : IOPS MiB/s Average min max
00:24:38.085 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1837.27 459.32 71064.16 33934.92 118409.97
00:24:38.085 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 606.76 151.69 221480.59 72255.67 344649.05
00:24:38.085 ========================================================
00:24:38.085 Total : 2444.04 611.01 108407.02 33934.92 344649.05
00:24:38.085
00:24:38.085 20:02:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:24:38.085 No valid NVMe controllers or AIO or URING devices found
00:24:38.085 Initializing NVMe Controllers
00:24:38.085 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:38.085 Controller IO queue size 128, less than required.
00:24:38.085 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:38.085 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:24:38.085 Controller IO queue size 128, less than required.
00:24:38.085 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:38.085 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:24:38.085 WARNING: Some requested NVMe devices were skipped
00:24:38.085 20:02:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:24:40.631 Initializing NVMe Controllers
00:24:40.631 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:40.631 Controller IO queue size 128, less than required.
00:24:40.631 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:40.631 Controller IO queue size 128, less than required.
00:24:40.631 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:40.631 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:40.631 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:40.631 Initialization complete. Launching workers.
00:24:40.631
00:24:40.631 ====================
00:24:40.631 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:24:40.631 TCP transport:
00:24:40.631 polls: 42299
00:24:40.631 idle_polls: 26610
00:24:40.631 sock_completions: 15689
00:24:40.631 nvme_completions: 7253
00:24:40.631 submitted_requests: 10882
00:24:40.631 queued_requests: 1
00:24:40.631
00:24:40.631 ====================
00:24:40.631 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:24:40.631 TCP transport:
00:24:40.631 polls: 38835
00:24:40.631 idle_polls: 24450
00:24:40.631 sock_completions: 14385
00:24:40.631 nvme_completions: 7407
00:24:40.631 submitted_requests: 11172
00:24:40.631 queued_requests: 1
00:24:40.631 ========================================================
00:24:40.631 Latency(us)
00:24:40.631 Device Information : IOPS MiB/s Average min max
00:24:40.631 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1812.99 453.25 72393.67 34762.77 131869.39
00:24:40.631 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1851.49 462.87 69578.95 32345.08 117732.28
00:24:40.631 ========================================================
00:24:40.631 Total : 3664.49 916.12 70971.53 32345.08 131869.39
00:24:40.631
00:24:40.631 20:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:24:40.631 20:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:40.892 20:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:24:40.892 20:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:24:40.892 20:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:24:40.892 20:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:40.892 20:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:24:40.892 20:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:40.892 20:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:24:40.892 20:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:40.892 20:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:40.892 rmmod nvme_tcp
00:24:40.892 rmmod nvme_fabrics
00:24:40.892 rmmod nvme_keyring
00:24:40.892 20:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:40.892 20:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:24:40.892 20:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:24:40.892 20:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3756806 ']'
00:24:40.892 20:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3756806
00:24:40.892 20:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3756806 ']'
00:24:40.892 20:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3756806
00:24:40.892 20:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname
00:24:40.892 20:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:40.892 20:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3756806
00:24:40.892 20:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0
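[Annotation] The runs above sweep the same TCP subsystem across queue depths and IO sizes (the -o 36964 case is skipped by design, since it is not a multiple of the 512 B sector size), ending with a statistics run. A representative invocation, flags verbatim from the trace:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 128 -o 262144 -w randrw -M 50 -t 2 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat

Reading the statistics blocks: polls minus idle_polls gives the number of productive poll-group iterations, and in this run that difference equals sock_completions exactly (42299 - 26610 = 15689 for NSID 1), i.e. roughly 37% of polls found socket work.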
00:24:40.892 20:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:40.892 20:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3756806'
00:24:40.892 killing process with pid 3756806
00:24:40.892 20:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3756806
00:24:40.892 20:02:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3756806
00:24:42.806 20:02:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:42.806 20:02:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:42.806 20:02:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:42.806 20:02:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr
00:24:42.806 20:02:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save
00:24:42.806 20:02:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:42.806 20:02:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore
00:24:42.806 20:02:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:42.806 20:02:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:42.806 20:02:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:42.806 20:02:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:42.806 20:02:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:45.353 20:02:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:45.353
00:24:45.353 real 0m24.405s
00:24:45.353 user 0m58.689s
00:24:45.353 sys 0m8.776s
00:24:45.353 20:02:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:45.353 20:02:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:45.353 ************************************
00:24:45.353 END TEST nvmf_perf
00:24:45.353 ************************************
00:24:45.353 20:02:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:24:45.353 20:02:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:45.353 20:02:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:45.353 20:02:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:45.353 ************************************
00:24:45.353 START TEST nvmf_fio_host
00:24:45.353 ************************************
00:24:45.353 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:24:45.353 * Looking for test storage...
00:24:45.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:45.353 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:45.353 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:45.353 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:45.353 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:45.353 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:45.353 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:45.353 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:45.353 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:45.353 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:45.353 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:45.353 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:45.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.354 --rc genhtml_branch_coverage=1 00:24:45.354 --rc genhtml_function_coverage=1 00:24:45.354 --rc genhtml_legend=1 00:24:45.354 --rc geninfo_all_blocks=1 00:24:45.354 --rc geninfo_unexecuted_blocks=1 00:24:45.354 00:24:45.354 ' 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:45.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.354 --rc genhtml_branch_coverage=1 00:24:45.354 --rc genhtml_function_coverage=1 00:24:45.354 --rc genhtml_legend=1 00:24:45.354 --rc geninfo_all_blocks=1 00:24:45.354 --rc geninfo_unexecuted_blocks=1 00:24:45.354 00:24:45.354 ' 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:45.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.354 --rc genhtml_branch_coverage=1 00:24:45.354 --rc genhtml_function_coverage=1 00:24:45.354 --rc genhtml_legend=1 00:24:45.354 --rc geninfo_all_blocks=1 00:24:45.354 --rc geninfo_unexecuted_blocks=1 00:24:45.354 00:24:45.354 ' 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:45.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.354 --rc genhtml_branch_coverage=1 00:24:45.354 --rc genhtml_function_coverage=1 00:24:45.354 --rc genhtml_legend=1 00:24:45.354 --rc geninfo_all_blocks=1 00:24:45.354 --rc geninfo_unexecuted_blocks=1 00:24:45.354 00:24:45.354 ' 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.354 20:02:45 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.354 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:45.355 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:45.355 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:45.355 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:45.355 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.355 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.355 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:45.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:45.355 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:45.355 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:45.355 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:45.355 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:45.355 
20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:45.355 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:45.355 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.355 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:45.355 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:45.355 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:45.355 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.355 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.355 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.355 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:45.355 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:45.355 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:45.355 20:02:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.495 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:53.495 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:53.495 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:53.495 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:53.495 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:53.495 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:53.495 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:53.495 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:53.495 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:53.495 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:53.495 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:53.495 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:53.495 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:53.495 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:53.495 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:53.495 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:53.495 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:53.495 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:53.495 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:53.495 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:53.495 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:53.495 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:53.495 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:53.495 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:53.495 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:53.495 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:53.495 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:53.495 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:53.496 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:53.496 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:53.496 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:53.496 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:53.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:53.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:24:53.496 00:24:53.496 --- 10.0.0.2 ping statistics --- 00:24:53.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.496 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:53.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:53.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:24:53.496 00:24:53.496 --- 10.0.0.1 ping statistics --- 00:24:53.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.496 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3763867 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3763867 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3763867 ']' 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:53.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:53.496 20:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.496 [2024-11-26 20:02:53.529817] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
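Condensed, the nvmftestinit trace above builds a two-port test bed from the pair of E810 interfaces (presumably cabled back to back on this phy rig): cvl_0_0 is moved into a private namespace and addressed as the 10.0.0.2 target side, cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator, an iptables rule opens the NVMe/TCP port, and both directions are ping-verified. Extracted from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

nvmf_tgt is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so the target only ever sees the namespaced port.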
00:24:53.496 [2024-11-26 20:02:53.529886] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:53.496 [2024-11-26 20:02:53.632511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:53.496 [2024-11-26 20:02:53.684961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:53.496 [2024-11-26 20:02:53.685015] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:53.496 [2024-11-26 20:02:53.685023] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:53.496 [2024-11-26 20:02:53.685030] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:53.496 [2024-11-26 20:02:53.685037] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:53.496 [2024-11-26 20:02:53.687107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.496 [2024-11-26 20:02:53.687265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:53.497 [2024-11-26 20:02:53.687597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:53.497 [2024-11-26 20:02:53.687600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.758 20:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:53.758 20:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:24:53.758 20:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:53.758 [2024-11-26 20:02:54.521661] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:53.758 20:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:53.758 20:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:53.758 20:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.019 20:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:54.019 Malloc1 00:24:54.019 20:02:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:54.280 20:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:54.541 20:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:54.802 [2024-11-26 20:02:55.385697] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.802 20:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:54.802 20:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:54.802 20:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:54.802 20:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:54.802 20:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:54.802 20:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:54.802 20:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:54.802 20:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:54.802 20:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:54.802 20:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:54.802 20:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:54.802 20:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:55.086 20:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:55.086 20:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:55.086 20:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:55.086 20:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:55.086 20:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:55.086 20:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:55.086 20:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:55.086 20:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:55.086 20:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:55.086 20:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:55.086 20:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:55.086 20:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:55.351 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:55.351 fio-3.35 00:24:55.351 Starting 1 thread 00:24:57.909 00:24:57.909 test: (groupid=0, jobs=1): 
err= 0: pid=3764637: Tue Nov 26 20:02:58 2024 00:24:57.909 read: IOPS=13.8k, BW=53.9MiB/s (56.5MB/s)(108MiB/2004msec) 00:24:57.909 slat (usec): min=2, max=280, avg= 2.18, stdev= 2.41 00:24:57.909 clat (usec): min=3596, max=9166, avg=5117.56, stdev=394.36 00:24:57.909 lat (usec): min=3599, max=9172, avg=5119.74, stdev=394.60 00:24:57.909 clat percentiles (usec): 00:24:57.909 | 1.00th=[ 4228], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:24:57.909 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5145], 60.00th=[ 5211], 00:24:57.909 | 70.00th=[ 5276], 80.00th=[ 5407], 90.00th=[ 5538], 95.00th=[ 5669], 00:24:57.909 | 99.00th=[ 5997], 99.50th=[ 6849], 99.90th=[ 8586], 99.95th=[ 8848], 00:24:57.909 | 99.99th=[ 9110] 00:24:57.909 bw ( KiB/s): min=54064, max=55616, per=99.94%, avg=55158.00, stdev=733.19, samples=4 00:24:57.909 iops : min=13516, max=13904, avg=13789.50, stdev=183.30, samples=4 00:24:57.909 write: IOPS=13.8k, BW=53.8MiB/s (56.5MB/s)(108MiB/2004msec); 0 zone resets 00:24:57.909 slat (usec): min=2, max=277, avg= 2.25, stdev= 1.84 00:24:57.909 clat (usec): min=2884, max=7998, avg=4136.70, stdev=342.19 00:24:57.909 lat (usec): min=2902, max=8000, avg=4138.94, stdev=342.50 00:24:57.909 clat percentiles (usec): 00:24:57.909 | 1.00th=[ 3425], 5.00th=[ 3687], 10.00th=[ 3785], 20.00th=[ 3884], 00:24:57.909 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:24:57.909 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4555], 00:24:57.909 | 99.00th=[ 4948], 99.50th=[ 5932], 99.90th=[ 7308], 99.95th=[ 7504], 00:24:57.909 | 99.99th=[ 7963] 00:24:57.909 bw ( KiB/s): min=54432, max=55488, per=100.00%, avg=55128.00, stdev=475.62, samples=4 00:24:57.909 iops : min=13608, max=13872, avg=13782.00, stdev=118.91, samples=4 00:24:57.910 lat (msec) : 4=16.00%, 10=84.00% 00:24:57.910 cpu : usr=77.68%, sys=21.17%, ctx=39, majf=0, minf=16 00:24:57.910 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:57.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:57.910 issued rwts: total=27651,27619,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.910 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:57.910 00:24:57.910 Run status group 0 (all jobs): 00:24:57.910 READ: bw=53.9MiB/s (56.5MB/s), 53.9MiB/s-53.9MiB/s (56.5MB/s-56.5MB/s), io=108MiB (113MB), run=2004-2004msec 00:24:57.910 WRITE: bw=53.8MiB/s (56.5MB/s), 53.8MiB/s-53.8MiB/s (56.5MB/s-56.5MB/s), io=108MiB (113MB), run=2004-2004msec 00:24:57.910 20:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:57.910 20:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:57.910 20:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:57.910 20:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:57.910 20:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:57.910 
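The target stack behind the randrw run above was assembled with six rpc.py calls; stripped of the workspace prefix, the traced sequence is:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

fio then drives the subsystem through the SPDK userspace plugin rather than a kernel block device, selecting the target entirely through the filename string:

    LD_PRELOAD=<spdk>/build/fio/spdk_nvme /usr/src/fio/fio example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The same invocation is repeated below with mock_sgl_config.fio for the SGL variant of the test.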
20:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:57.910 20:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:57.910 20:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:57.910 20:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:57.910 20:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:57.910 20:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:57.910 20:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:57.910 20:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:57.910 20:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:57.910 20:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:57.910 20:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:57.910 20:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:57.910 20:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:57.910 20:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:57.910 20:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:57.910 20:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:57.910 20:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:58.178 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:58.178 fio-3.35 00:24:58.178 Starting 1 thread 00:25:00.722 00:25:00.722 test: (groupid=0, jobs=1): err= 0: pid=3765225: Tue Nov 26 20:03:01 2024 00:25:00.722 read: IOPS=9537, BW=149MiB/s (156MB/s)(299MiB/2004msec) 00:25:00.722 slat (usec): min=3, max=110, avg= 3.62, stdev= 1.60 00:25:00.722 clat (usec): min=1665, max=16082, avg=8172.54, stdev=1903.55 00:25:00.722 lat (usec): min=1669, max=16086, avg=8176.16, stdev=1903.69 00:25:00.722 clat percentiles (usec): 00:25:00.722 | 1.00th=[ 4228], 5.00th=[ 5276], 10.00th=[ 5800], 20.00th=[ 6456], 00:25:00.722 | 30.00th=[ 7046], 40.00th=[ 7570], 50.00th=[ 8094], 60.00th=[ 8586], 00:25:00.722 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[11338], 00:25:00.722 | 99.00th=[13042], 99.50th=[13698], 99.90th=[14746], 99.95th=[15664], 00:25:00.722 | 99.99th=[16057] 00:25:00.722 bw ( KiB/s): min=69120, max=83488, per=49.68%, avg=75816.00, stdev=5898.51, samples=4 00:25:00.722 iops : min= 4320, max= 5218, avg=4738.50, stdev=368.66, samples=4 00:25:00.722 write: IOPS=5518, BW=86.2MiB/s (90.4MB/s)(155MiB/1795msec); 0 zone resets 00:25:00.722 slat (usec): min=39, max=345, 
avg=40.95, stdev= 7.57 00:25:00.722 clat (usec): min=1747, max=15344, avg=9119.08, stdev=1339.72 00:25:00.722 lat (usec): min=1787, max=15477, avg=9160.03, stdev=1341.62 00:25:00.722 clat percentiles (usec): 00:25:00.722 | 1.00th=[ 6194], 5.00th=[ 7177], 10.00th=[ 7504], 20.00th=[ 7963], 00:25:00.722 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9503], 00:25:00.722 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10683], 95.00th=[11338], 00:25:00.722 | 99.00th=[12518], 99.50th=[13304], 99.90th=[15008], 99.95th=[15270], 00:25:00.722 | 99.99th=[15401] 00:25:00.722 bw ( KiB/s): min=72128, max=86976, per=89.29%, avg=78840.00, stdev=6269.08, samples=4 00:25:00.722 iops : min= 4508, max= 5436, avg=4927.50, stdev=391.82, samples=4 00:25:00.722 lat (msec) : 2=0.02%, 4=0.53%, 10=79.61%, 20=19.84% 00:25:00.722 cpu : usr=85.32%, sys=13.33%, ctx=11, majf=0, minf=24 00:25:00.722 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:00.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:00.722 issued rwts: total=19113,9906,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.722 00:25:00.722 Run status group 0 (all jobs): 00:25:00.722 READ: bw=149MiB/s (156MB/s), 149MiB/s-149MiB/s (156MB/s-156MB/s), io=299MiB (313MB), run=2004-2004msec 00:25:00.722 WRITE: bw=86.2MiB/s (90.4MB/s), 86.2MiB/s-86.2MiB/s (90.4MB/s-90.4MB/s), io=155MiB (162MB), run=1795-1795msec 00:25:00.722 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:00.722 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:00.722 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:00.722 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:00.722 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:00.722 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:00.722 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:25:00.722 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:00.722 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:25:00.722 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:00.722 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:00.722 rmmod nvme_tcp 00:25:00.722 rmmod nvme_fabrics 00:25:00.722 rmmod nvme_keyring 00:25:00.722 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:00.722 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:25:00.722 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:25:00.722 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3763867 ']' 00:25:00.722 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3763867 00:25:00.722 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3763867 ']' 00:25:00.722 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 
3763867 00:25:00.722 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:25:00.722 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:00.722 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3763867 00:25:00.722 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:00.722 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:00.722 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3763867' 00:25:00.722 killing process with pid 3763867 00:25:00.722 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3763867 00:25:00.722 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3763867 00:25:00.983 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:00.983 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:00.983 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:00.983 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:00.983 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:25:00.983 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:00.983 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:00.983 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:00.983 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:00.983 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.983 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:00.983 20:03:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.525 20:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:03.526 00:25:03.526 real 0m18.008s 00:25:03.526 user 1m11.146s 00:25:03.526 sys 0m7.637s 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.526 ************************************ 00:25:03.526 END TEST nvmf_fio_host 00:25:03.526 ************************************ 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.526 ************************************ 00:25:03.526 START TEST nvmf_failover 00:25:03.526 ************************************ 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:03.526 * Looking for test storage... 00:25:03.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:03.526 20:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:03.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.526 --rc genhtml_branch_coverage=1 00:25:03.526 --rc genhtml_function_coverage=1 00:25:03.526 --rc genhtml_legend=1 00:25:03.526 --rc geninfo_all_blocks=1 00:25:03.526 --rc geninfo_unexecuted_blocks=1 00:25:03.526 00:25:03.526 ' 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:03.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.526 --rc genhtml_branch_coverage=1 00:25:03.526 --rc genhtml_function_coverage=1 00:25:03.526 --rc genhtml_legend=1 00:25:03.526 --rc geninfo_all_blocks=1 00:25:03.526 --rc geninfo_unexecuted_blocks=1 00:25:03.526 00:25:03.526 ' 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:03.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.526 --rc genhtml_branch_coverage=1 00:25:03.526 --rc genhtml_function_coverage=1 00:25:03.526 --rc genhtml_legend=1 00:25:03.526 --rc geninfo_all_blocks=1 00:25:03.526 --rc geninfo_unexecuted_blocks=1 00:25:03.526 00:25:03.526 ' 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:03.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.526 --rc genhtml_branch_coverage=1 00:25:03.526 --rc genhtml_function_coverage=1 00:25:03.526 --rc genhtml_legend=1 00:25:03.526 --rc geninfo_all_blocks=1 00:25:03.526 --rc geninfo_unexecuted_blocks=1 00:25:03.526 00:25:03.526 ' 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:03.526 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.527 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.527 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.527 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:03.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:03.527 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:03.527 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:03.527 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:03.527 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:03.527 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:03.527 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
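The lcov gate traced at the head of this test (lt 1.15 2) runs the generic cmp_versions helper from scripts/common.sh: split both strings on ".-:" and walk the fields as decimals. A sketch reconstructed from the xtrace, not the verbatim source:

    # Returns 0 if $1 is an older version than $2, e.g. lt 1.15 2
    lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )) && return 1
            (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )) && return 0
        done
        return 1    # equal is not less-than
    }

Here 1.15 splits into (1 15) and loses to (2) on the first field, so the script keeps the pre-2.0 "--rc lcov_branch_coverage=1" flag spelling seen in LCOV_OPTS above.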
00:25:03.527 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:03.527 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:03.527 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:03.527 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.527 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:03.527 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:03.527 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:03.527 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.527 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.527 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.527 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:03.527 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:03.527 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:25:03.527 20:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:11.664 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:11.664 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:11.664 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:11.665 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:11.665 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:11.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:11.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:25:11.665 00:25:11.665 --- 10.0.0.2 ping statistics --- 00:25:11.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.665 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:11.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:11.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:25:11.665 00:25:11.665 --- 10.0.0.1 ping statistics --- 00:25:11.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.665 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3770440 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3770440 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3770440 ']' 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:11.665 20:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:11.665 [2024-11-26 20:03:11.667852] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:25:11.665 [2024-11-26 20:03:11.667918] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.665 [2024-11-26 20:03:11.767484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:11.665 [2024-11-26 20:03:11.819619] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
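Once nvmf_tgt is up and listening on /var/tmp/spdk.sock, the provisioning traced below is a plain RPC sequence: create the TCP transport, back the subsystem with a malloc bdev, and expose three portals on the target address. Condensed (rpc.py stands for spdk/scripts/rpc.py; every name, size, and port is exactly as used in this run):

    rpc.py nvmf_create_transport -t tcp -o -u 8192            # transport options as used by the harness
    rpc.py bdev_malloc_create 64 512 -b Malloc0               # 64 MiB RAM-backed bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                            # three portals, same address
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done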
00:25:11.665 [2024-11-26 20:03:11.819669] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.665 [2024-11-26 20:03:11.819677] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:11.665 [2024-11-26 20:03:11.819685] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:11.665 [2024-11-26 20:03:11.819690] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:11.665 [2024-11-26 20:03:11.821738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:11.665 [2024-11-26 20:03:11.821894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.665 [2024-11-26 20:03:11.821896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:11.665 20:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:11.665 20:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:11.665 20:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:11.665 20:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:11.665 20:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:11.926 20:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:11.926 20:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:11.926 [2024-11-26 20:03:12.692230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:11.926 20:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:12.187 Malloc0 00:25:12.187 20:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:12.447 20:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:12.708 20:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:12.708 [2024-11-26 20:03:13.496642] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:12.968 20:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:12.968 [2024-11-26 20:03:13.689198] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:12.968 20:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:13.229 [2024-11-26 20:03:13.885896] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:25:13.229 20:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3770932 00:25:13.229 20:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:13.229 20:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:13.229 20:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3770932 /var/tmp/bdevperf.sock 00:25:13.229 20:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3770932 ']' 00:25:13.229 20:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:13.229 20:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:13.229 20:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:13.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:13.229 20:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:13.229 20:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:14.171 20:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.171 20:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:14.171 20:03:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:14.431 NVMe0n1 00:25:14.431 20:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:15.002 00:25:15.002 20:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:15.002 20:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3771233 00:25:15.002 20:03:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:15.992 20:03:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:15.992 [2024-11-26 20:03:16.799517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7ed0 is same with the state(6) to be set 00:25:15.992 [... identical recv-state messages for tqpair=0x9e7ed0 repeated through 2024-11-26 20:03:16.799668, trimmed ...] 00:25:16.253 20:03:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:19.557 20:03:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:19.557 00:25:19.557 20:03:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:19.557 [2024-11-26 20:03:20.249944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e8cf0 is same with the state(6) to be set 00:25:19.557 [... identical recv-state messages for tqpair=0x9e8cf0 repeated through 2024-11-26 20:03:20.250296, trimmed ...]
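The exchange above is the core of the failover exercise: the same subsystem is attached through two portals with -x failover, bdevperf starts a 15-second verify workload, and the listener the host is actively using is then pulled away, which forces bdev_nvme onto the surviving path (the trimmed recv-state errors are, by all appearances, the target tearing down the removed portal's connections). Reduced to its essentials, with all arguments as in this run (bdevperf.py stands for spdk/examples/bdev/bdevperf/bdevperf.py):

    # attach the same controller over two portals; -x failover makes them alternate paths
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &     # kick off the timed verify run
    sleep 1
    # drop the active portal; I/O should continue on 4421 after a brief reconnect
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420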
00:25:19.558 20:03:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:22.862 20:03:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:22.862 [2024-11-26 20:03:23.438473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:22.862 20:03:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:23.808 20:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:24.069 [2024-11-26 20:03:24.629666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9bf0 is same with the state(6) to be set 00:25:24.069 [... identical recv-state messages for tqpair=0x9e9bf0 repeated through 2024-11-26 20:03:24.629767, trimmed ...]
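The failback leg just traced uses the same trick in the other direction: port 4420 is restored before 4422 is torn down, so the host always has a live portal to land on. In outline:

    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1      # give the host a moment to see the restored path
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

The JSON summary that follows is worth a quick sanity check: 12225 IOPS at an io_size of 4096 bytes is 12225 * 4096 / 2^20, or about 47.75 MiB/s, which matches the reported mibps, and the nonzero io_failed count is presumably the I/O caught in flight across the three deliberate path switches.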
00:25:24.069 20:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3771233 00:25:30.673 { 00:25:30.673 "results": [ 00:25:30.673 { 00:25:30.673 "job": "NVMe0n1", 00:25:30.673 "core_mask": "0x1", 00:25:30.673 "workload": "verify", 00:25:30.673 "status": "finished", 00:25:30.673 "verify_range": { 00:25:30.673 "start": 0, 00:25:30.673 "length": 16384 00:25:30.673 }, 00:25:30.673 "queue_depth": 128, 00:25:30.673 "io_size": 4096, 00:25:30.673 "runtime": 15.009187, 00:25:30.673 "iops": 12225.245777802622, 00:25:30.673 "mibps": 47.75486631954149, 00:25:30.673 "io_failed": 17461, 00:25:30.673 "io_timeout": 0, 00:25:30.673 "avg_latency_us": 9539.121729368208, 00:25:30.673 "min_latency_us": 546.1333333333333, 00:25:30.673 "max_latency_us": 28835.84 00:25:30.673 } 00:25:30.673 ], 00:25:30.673 "core_count": 1 00:25:30.673 } 00:25:30.673 20:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3770932 00:25:30.673 20:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3770932 ']' 00:25:30.673 20:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3770932 00:25:30.673 20:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:30.673 20:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:30.673 20:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3770932 00:25:30.673 20:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:30.673 20:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:30.673 20:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3770932' 00:25:30.673 killing process with pid 3770932 00:25:30.673 20:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3770932 00:25:30.673 20:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3770932 00:25:30.673 20:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:30.673 [2024-11-26 20:03:13.972947] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization...
00:25:30.673 [2024-11-26 20:03:13.973031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3770932 ] 00:25:30.673 [2024-11-26 20:03:14.067875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.673 [2024-11-26 20:03:14.121424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.673 Running I/O for 15 seconds... 00:25:30.673 11684.00 IOPS, 45.64 MiB/s [2024-11-26T19:03:31.494Z] [2024-11-26 20:03:16.803142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.673 [2024-11-26 20:03:16.803183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.673 [... similar WRITE/READ print_command entries for lba 100592-101256, each completed ABORTED - SQ DELETION, together with nvme_qpair_abort_queued_reqs and manual-completion messages, repeated through 2024-11-26 20:03:16.804690, trimmed ...] 00:25:30.675 [2024-11-26 20:03:16.804697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*:
aborting queued i/o 00:25:30.675 [2024-11-26 20:03:16.804703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.675 [2024-11-26 20:03:16.804708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101264 len:8 PRP1 0x0 PRP2 0x0 00:25:30.675 [2024-11-26 20:03:16.804715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.675 [2024-11-26 20:03:16.804723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.675 [2024-11-26 20:03:16.804729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.675 [2024-11-26 20:03:16.804735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101272 len:8 PRP1 0x0 PRP2 0x0 00:25:30.675 [2024-11-26 20:03:16.804743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.675 [2024-11-26 20:03:16.804750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.675 [2024-11-26 20:03:16.804756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.675 [2024-11-26 20:03:16.804762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101280 len:8 PRP1 0x0 PRP2 0x0 00:25:30.675 [2024-11-26 20:03:16.804769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.675 [2024-11-26 20:03:16.804777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.675 [2024-11-26 20:03:16.804782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.675 [2024-11-26 20:03:16.804789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101288 len:8 PRP1 0x0 PRP2 0x0 00:25:30.675 [2024-11-26 20:03:16.804796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.675 [2024-11-26 20:03:16.804803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.675 [2024-11-26 20:03:16.804809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.675 [2024-11-26 20:03:16.804815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101296 len:8 PRP1 0x0 PRP2 0x0 00:25:30.675 [2024-11-26 20:03:16.804822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.675 [2024-11-26 20:03:16.804830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.675 [2024-11-26 20:03:16.804835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.675 [2024-11-26 20:03:16.804842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101304 len:8 PRP1 0x0 PRP2 0x0 00:25:30.675 [2024-11-26 20:03:16.804849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.675 [2024-11-26 20:03:16.804856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.675 [2024-11-26 
20:03:16.804862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.675 [2024-11-26 20:03:16.804869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101312 len:8 PRP1 0x0 PRP2 0x0 00:25:30.676 [2024-11-26 20:03:16.804876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.676 [2024-11-26 20:03:16.804884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.676 [2024-11-26 20:03:16.804890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.676 [2024-11-26 20:03:16.804896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101320 len:8 PRP1 0x0 PRP2 0x0 00:25:30.676 [2024-11-26 20:03:16.804903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.676 [2024-11-26 20:03:16.804911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.676 [2024-11-26 20:03:16.804916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.676 [2024-11-26 20:03:16.804922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101328 len:8 PRP1 0x0 PRP2 0x0 00:25:30.676 [2024-11-26 20:03:16.804929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.676 [2024-11-26 20:03:16.804937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.676 [2024-11-26 20:03:16.804943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.676 [2024-11-26 20:03:16.804949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101336 len:8 PRP1 0x0 PRP2 0x0 00:25:30.676 [2024-11-26 20:03:16.804956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.676 [2024-11-26 20:03:16.804964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.676 [2024-11-26 20:03:16.804969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.676 [2024-11-26 20:03:16.804975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101344 len:8 PRP1 0x0 PRP2 0x0 00:25:30.676 [2024-11-26 20:03:16.804982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.676 [2024-11-26 20:03:16.804989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.676 [2024-11-26 20:03:16.804995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.676 [2024-11-26 20:03:16.805001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101352 len:8 PRP1 0x0 PRP2 0x0 00:25:30.676 [2024-11-26 20:03:16.805008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.676 [2024-11-26 20:03:16.805015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.676 [2024-11-26 20:03:16.805021] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.676 [2024-11-26 20:03:16.805027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101360 len:8 PRP1 0x0 PRP2 0x0 00:25:30.676 [2024-11-26 20:03:16.805034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.676 [2024-11-26 20:03:16.805042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.676 [2024-11-26 20:03:16.805047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.676 [2024-11-26 20:03:16.805054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101368 len:8 PRP1 0x0 PRP2 0x0 00:25:30.676 [2024-11-26 20:03:16.805060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.676 [2024-11-26 20:03:16.805068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.676 [2024-11-26 20:03:16.805075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.676 [2024-11-26 20:03:16.805081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101376 len:8 PRP1 0x0 PRP2 0x0 00:25:30.676 [2024-11-26 20:03:16.805088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.676 [2024-11-26 20:03:16.805096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.676 [2024-11-26 20:03:16.805101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.676 [2024-11-26 20:03:16.805107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101384 len:8 PRP1 0x0 PRP2 0x0 00:25:30.676 [2024-11-26 20:03:16.805114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.676 [2024-11-26 20:03:16.805122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.676 [2024-11-26 20:03:16.805127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.676 [2024-11-26 20:03:16.805133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100632 len:8 PRP1 0x0 PRP2 0x0 00:25:30.676 [2024-11-26 20:03:16.805140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.676 [2024-11-26 20:03:16.805148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.676 [2024-11-26 20:03:16.805153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.676 [2024-11-26 20:03:16.805163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100640 len:8 PRP1 0x0 PRP2 0x0 00:25:30.676 [2024-11-26 20:03:16.805171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.676 [2024-11-26 20:03:16.805179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.676 [2024-11-26 20:03:16.805184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:25:30.676 [2024-11-26 20:03:16.805190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100648 len:8 PRP1 0x0 PRP2 0x0 00:25:30.676 [2024-11-26 20:03:16.805197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.676 [2024-11-26 20:03:16.805205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.676 [2024-11-26 20:03:16.805211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.676 [2024-11-26 20:03:16.805217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100656 len:8 PRP1 0x0 PRP2 0x0 00:25:30.676 [2024-11-26 20:03:16.805224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.676 [2024-11-26 20:03:16.805232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.676 [2024-11-26 20:03:16.805238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.676 [2024-11-26 20:03:16.805244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100664 len:8 PRP1 0x0 PRP2 0x0 00:25:30.676 [2024-11-26 20:03:16.805252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.676 [2024-11-26 20:03:16.805259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.676 [2024-11-26 20:03:16.805265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.676 [2024-11-26 20:03:16.805271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100672 len:8 PRP1 0x0 PRP2 0x0 00:25:30.676 [2024-11-26 20:03:16.805278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.676 [2024-11-26 20:03:16.805287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.676 [2024-11-26 20:03:16.805293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.676 [2024-11-26 20:03:16.805299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101392 len:8 PRP1 0x0 PRP2 0x0 00:25:30.676 [2024-11-26 20:03:16.805306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.676 [2024-11-26 20:03:16.805313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.676 [2024-11-26 20:03:16.805319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.676 [2024-11-26 20:03:16.805325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101400 len:8 PRP1 0x0 PRP2 0x0 00:25:30.676 [2024-11-26 20:03:16.805332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.676 [2024-11-26 20:03:16.805340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.676 [2024-11-26 20:03:16.805345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.676 [2024-11-26 
20:03:16.805352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101408 len:8 PRP1 0x0 PRP2 0x0 00:25:30.676 [2024-11-26 20:03:16.805359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.676 [2024-11-26 20:03:16.805366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.676 [2024-11-26 20:03:16.805372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.676 [2024-11-26 20:03:16.805378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101416 len:8 PRP1 0x0 PRP2 0x0 00:25:30.676 [2024-11-26 20:03:16.805385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.676 [2024-11-26 20:03:16.805393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.676 [2024-11-26 20:03:16.805398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.676 [2024-11-26 20:03:16.805404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101424 len:8 PRP1 0x0 PRP2 0x0 00:25:30.676 [2024-11-26 20:03:16.805411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.676 [2024-11-26 20:03:16.805419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.676 [2024-11-26 20:03:16.805424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.676 [2024-11-26 20:03:16.805430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101432 len:8 PRP1 0x0 PRP2 0x0 00:25:30.676 [2024-11-26 20:03:16.805437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.676 [2024-11-26 20:03:16.805445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.676 [2024-11-26 20:03:16.805450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.676 [2024-11-26 20:03:16.805456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101440 len:8 PRP1 0x0 PRP2 0x0 00:25:30.676 [2024-11-26 20:03:16.805463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.676 [2024-11-26 20:03:16.805471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.676 [2024-11-26 20:03:16.805477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.676 [2024-11-26 20:03:16.805484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101448 len:8 PRP1 0x0 PRP2 0x0 00:25:30.676 [2024-11-26 20:03:16.805491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.677 [2024-11-26 20:03:16.805499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.677 [2024-11-26 20:03:16.805504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.677 [2024-11-26 20:03:16.805510] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101456 len:8 PRP1 0x0 PRP2 0x0 00:25:30.677 [2024-11-26 20:03:16.805517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.677 [2024-11-26 20:03:16.805525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.677 [2024-11-26 20:03:16.805531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.677 [2024-11-26 20:03:16.816629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101464 len:8 PRP1 0x0 PRP2 0x0 00:25:30.677 [2024-11-26 20:03:16.816658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.677 [2024-11-26 20:03:16.816672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.677 [2024-11-26 20:03:16.816678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.677 [2024-11-26 20:03:16.816684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101472 len:8 PRP1 0x0 PRP2 0x0 00:25:30.677 [2024-11-26 20:03:16.816692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.677 [2024-11-26 20:03:16.816700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.677 [2024-11-26 20:03:16.816705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.677 [2024-11-26 20:03:16.816711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101480 len:8 PRP1 0x0 PRP2 0x0 00:25:30.677 [2024-11-26 20:03:16.816720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.677 [2024-11-26 20:03:16.816727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.677 [2024-11-26 20:03:16.816733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.677 [2024-11-26 20:03:16.816739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101488 len:8 PRP1 0x0 PRP2 0x0 00:25:30.677 [2024-11-26 20:03:16.816747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.677 [2024-11-26 20:03:16.816755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.677 [2024-11-26 20:03:16.816760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.677 [2024-11-26 20:03:16.816766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101496 len:8 PRP1 0x0 PRP2 0x0 00:25:30.677 [2024-11-26 20:03:16.816773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.677 [2024-11-26 20:03:16.816781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.677 [2024-11-26 20:03:16.816786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.677 [2024-11-26 20:03:16.816792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:101504 len:8 PRP1 0x0 PRP2 0x0 00:25:30.677 [2024-11-26 20:03:16.816799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.677 [2024-11-26 20:03:16.816807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.677 [2024-11-26 20:03:16.816817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.677 [2024-11-26 20:03:16.816823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101512 len:8 PRP1 0x0 PRP2 0x0 00:25:30.677 [2024-11-26 20:03:16.816830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.677 [2024-11-26 20:03:16.816837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.677 [2024-11-26 20:03:16.816843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.677 [2024-11-26 20:03:16.816849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101520 len:8 PRP1 0x0 PRP2 0x0 00:25:30.677 [2024-11-26 20:03:16.816856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.677 [2024-11-26 20:03:16.816864] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.677 [2024-11-26 20:03:16.816869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.677 [2024-11-26 20:03:16.816875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101528 len:8 PRP1 0x0 PRP2 0x0 00:25:30.677 [2024-11-26 20:03:16.816882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.677 [2024-11-26 20:03:16.816890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.677 [2024-11-26 20:03:16.816896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.677 [2024-11-26 20:03:16.816903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101536 len:8 PRP1 0x0 PRP2 0x0 00:25:30.677 [2024-11-26 20:03:16.816910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.677 [2024-11-26 20:03:16.816917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.677 [2024-11-26 20:03:16.816923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.677 [2024-11-26 20:03:16.816929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101544 len:8 PRP1 0x0 PRP2 0x0 00:25:30.677 [2024-11-26 20:03:16.816936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.677 [2024-11-26 20:03:16.816944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.677 [2024-11-26 20:03:16.816949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.677 [2024-11-26 20:03:16.816955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101552 len:8 PRP1 0x0 PRP2 
0x0 00:25:30.677 [2024-11-26 20:03:16.816962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.677 [2024-11-26 20:03:16.816970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.677 [2024-11-26 20:03:16.816976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.677 [2024-11-26 20:03:16.816982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101560 len:8 PRP1 0x0 PRP2 0x0 00:25:30.677 [2024-11-26 20:03:16.816989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.677 [2024-11-26 20:03:16.816996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.677 [2024-11-26 20:03:16.817002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.677 [2024-11-26 20:03:16.817008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101568 len:8 PRP1 0x0 PRP2 0x0 00:25:30.677 [2024-11-26 20:03:16.817015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.677 [2024-11-26 20:03:16.817024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.677 [2024-11-26 20:03:16.817030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.677 [2024-11-26 20:03:16.817035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101576 len:8 PRP1 0x0 PRP2 0x0 00:25:30.677 [2024-11-26 20:03:16.817043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.677 [2024-11-26 20:03:16.817050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.677 [2024-11-26 20:03:16.817056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.677 [2024-11-26 20:03:16.817062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101584 len:8 PRP1 0x0 PRP2 0x0 00:25:30.677 [2024-11-26 20:03:16.817069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.677 [2024-11-26 20:03:16.817076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.677 [2024-11-26 20:03:16.817082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.677 [2024-11-26 20:03:16.817088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101592 len:8 PRP1 0x0 PRP2 0x0 00:25:30.677 [2024-11-26 20:03:16.817095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.677 [2024-11-26 20:03:16.817102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.677 [2024-11-26 20:03:16.817108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.677 [2024-11-26 20:03:16.817113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101600 len:8 PRP1 0x0 PRP2 0x0 00:25:30.677 [2024-11-26 20:03:16.817121] 
00:25:30.678 [2024-11-26 20:03:16.817200] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:25:30.678 [2024-11-26 20:03:16.817231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:30.678 [2024-11-26 20:03:16.817240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST/ABORTED - SQ DELETION pair follows for admin commands cid:1, cid:2, and cid:3 ...]
00:25:30.678 [2024-11-26 20:03:16.817297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
[2024-11-26 20:03:16.817332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2fda0 (9): Bad file descriptor
[2024-11-26 20:03:16.820866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
[2024-11-26 20:03:16.978714] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:25:30.678 10662.50 IOPS, 41.65 MiB/s [2024-11-26T19:03:31.499Z]
10834.67 IOPS, 42.32 MiB/s [2024-11-26T19:03:31.499Z]
11256.50 IOPS, 43.97 MiB/s [2024-11-26T19:03:31.499Z]
[2024-11-26 20:03:20.251076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:68320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-26 20:03:20.251106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for READs lba 68328-68872 and then WRITEs from lba 68880 on, each aborted with ABORTED - SQ DELETION (00/08) ...]
00:25:30.680 [2024-11-26 20:03:20.252062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:92 nsid:1 lba:68960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:68976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:68984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:68992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:69000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:69008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:69016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:69024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:69032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:69040 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:69048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:69056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:69064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:69072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:69080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:69088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:69096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:69104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:69112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:69120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 
20:03:20.252301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:69128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:69136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:69144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:69152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:69160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:69168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:69176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:69184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:69192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:69200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.680 [2024-11-26 20:03:20.252423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:69208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-11-26 20:03:20.252429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.681 [2024-11-26 20:03:20.252435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:69216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.681 [2024-11-26 20:03:20.252440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.681 [2024-11-26 20:03:20.252447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:69224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.681 [2024-11-26 20:03:20.252452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.681 [2024-11-26 20:03:20.252458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:69232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.681 [2024-11-26 20:03:20.252463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.681 [2024-11-26 20:03:20.252469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:69240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.681 [2024-11-26 20:03:20.252474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.681 [2024-11-26 20:03:20.252481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:69248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.681 [2024-11-26 20:03:20.252486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.681 [2024-11-26 20:03:20.252492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:69256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.681 [2024-11-26 20:03:20.252497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.681 [2024-11-26 20:03:20.252503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:69264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.681 [2024-11-26 20:03:20.252508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.681 [2024-11-26 20:03:20.252515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:69272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.681 [2024-11-26 20:03:20.252520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.681 [2024-11-26 20:03:20.252526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.681 [2024-11-26 20:03:20.252531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.681 [2024-11-26 20:03:20.252537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:69288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.681 [2024-11-26 20:03:20.252542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.681 [2024-11-26 20:03:20.252548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:69296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.681 [2024-11-26 20:03:20.252553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.681 [2024-11-26 20:03:20.252560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:69304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.681 [2024-11-26 20:03:20.252565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.681 [2024-11-26 20:03:20.252572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:69312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.681 [2024-11-26 20:03:20.252577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.681 [2024-11-26 20:03:20.252584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:69320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.681 [2024-11-26 20:03:20.252589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.681 [2024-11-26 20:03:20.252595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:69328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.681 [2024-11-26 20:03:20.252600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.681 [2024-11-26 20:03:20.252618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.681 [2024-11-26 20:03:20.252623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.681 [2024-11-26 20:03:20.252627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69336 len:8 PRP1 0x0 PRP2 0x0 00:25:30.681 [2024-11-26 20:03:20.252633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.681 [2024-11-26 20:03:20.252668] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:30.681 [2024-11-26 20:03:20.252685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.681 [2024-11-26 20:03:20.252690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.681 [2024-11-26 20:03:20.252696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.681 [2024-11-26 20:03:20.252701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
00:25:30.681 [2024-11-26 20:03:20.252728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:25:30.681 [2024-11-26 20:03:20.252749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2fda0 (9): Bad file descriptor
00:25:30.681 [2024-11-26 20:03:20.255189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:25:30.681 [2024-11-26 20:03:20.398961] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:25:30.681 11207.40 IOPS, 43.78 MiB/s [2024-11-26T19:03:31.502Z] 11490.33 IOPS, 44.88 MiB/s [2024-11-26T19:03:31.502Z] 11692.86 IOPS, 45.68 MiB/s [2024-11-26T19:03:31.502Z] 11824.12 IOPS, 46.19 MiB/s [2024-11-26T19:03:31.502Z]
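The burst above is the abort-and-failover pattern of this test: every I/O still queued on the old 10.0.0.2:4421 path is force-completed with ABORTED - SQ DELETION (printed by nvme_io_qpair_print_command and spdk_nvme_print_completion), the controller fails over to 10.0.0.2:4422 and resets, and the throughput samples resume. To condense such a burst when eyeballing a run, a small awk pass over the saved console output works; this is a hypothetical post-processing helper (console.log is a placeholder filename, not an artifact the harness produces), assuming one log record per line as in the raw console:

awk '
  # Summarize the I/O commands printed by nvme_io_qpair_print_command:
  # per-opcode count plus the LBA range they span. In these bursts every
  # printed command is one that was force-completed as aborted.
  /nvme_io_qpair_print_command/ {
    op = ""; lba = -1
    for (i = 1; i <= NF; i++) {
      if ($i == "*NOTICE*:") op = $(i + 1)             # READ, WRITE, ...
      if ($i ~ /^lba:[0-9]+$/) lba = substr($i, 5) + 0
    }
    if (op == "" || lba < 0) next
    n[op]++
    if (!(op in lo) || lba < lo[op]) lo[op] = lba
    if (!(op in hi) || lba > hi[op]) hi[op] = lba
  }
  END {
    for (op in n)
      printf "%s: %d aborted commands, lba %d..%d\n", op, n[op], lo[op], hi[op]
  }
' console.log

Run against the first burst this should report 30 READs (lba 68640..68872) and 58 WRITEs (lba 68880..69336), a quick check that the aborted window is contiguous rather than scattered.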
00:25:30.681 [2024-11-26 20:03:24.631801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:42992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.681 [2024-11-26 20:03:24.631831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:30.681 [2024-11-26 20:03:24.631845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:42872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:30.681 [2024-11-26 20:03:24.631854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 73 more command/completion pairs omitted: WRITE lba 43000 through 43520 and READ lba 42880 through 42928 (len:8), each ABORTED - SQ DELETION (00/08) ...]
00:25:30.683 [2024-11-26 20:03:24.632734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:30.683 [2024-11-26 20:03:24.632739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43528 len:8 PRP1 0x0 PRP2 0x0
00:25:30.683 [2024-11-26 20:03:24.632744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:30.683 [2024-11-26 20:03:24.632866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:30.683 [2024-11-26 20:03:24.632871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:30.683 [2024-11-26 20:03:24.632877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43536 len:8 PRP1 0x0 PRP2 0x0
00:25:30.683 [2024-11-26 20:03:24.632883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same abort-queued/manual-complete sequence repeated 23 more times for WRITE cid:0 lba 43544 through 43720 (len:8, PRP1 0x0 PRP2 0x0), each ABORTED - SQ DELETION (00/08) ...]
00:25:30.685 [2024-11-26 20:03:24.633318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:30.685 [2024-11-26 20:03:24.633322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:30.685 [2024-11-26
20:03:24.633326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43728 len:8 PRP1 0x0 PRP2 0x0 00:25:30.685 [2024-11-26 20:03:24.633331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.685 [2024-11-26 20:03:24.633336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.685 [2024-11-26 20:03:24.633340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.685 [2024-11-26 20:03:24.633344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43736 len:8 PRP1 0x0 PRP2 0x0 00:25:30.685 [2024-11-26 20:03:24.633349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.685 [2024-11-26 20:03:24.633354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.685 [2024-11-26 20:03:24.633358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.685 [2024-11-26 20:03:24.633363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43744 len:8 PRP1 0x0 PRP2 0x0 00:25:30.685 [2024-11-26 20:03:24.633368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.685 [2024-11-26 20:03:24.633374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.685 [2024-11-26 20:03:24.633377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.685 [2024-11-26 20:03:24.633382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43752 len:8 PRP1 0x0 PRP2 0x0 00:25:30.685 [2024-11-26 20:03:24.633387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.685 [2024-11-26 20:03:24.633393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.685 [2024-11-26 20:03:24.633397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.685 [2024-11-26 20:03:24.633401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43760 len:8 PRP1 0x0 PRP2 0x0 00:25:30.685 [2024-11-26 20:03:24.633406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.685 [2024-11-26 20:03:24.633411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.685 [2024-11-26 20:03:24.633415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.685 [2024-11-26 20:03:24.633420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43768 len:8 PRP1 0x0 PRP2 0x0 00:25:30.685 [2024-11-26 20:03:24.633425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.685 [2024-11-26 20:03:24.633430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.685 [2024-11-26 20:03:24.633434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.685 [2024-11-26 20:03:24.633438] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42936 len:8 PRP1 0x0 PRP2 0x0 00:25:30.685 [2024-11-26 20:03:24.633443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.685 [2024-11-26 20:03:24.633448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.685 [2024-11-26 20:03:24.633452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.685 [2024-11-26 20:03:24.633458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42944 len:8 PRP1 0x0 PRP2 0x0 00:25:30.685 [2024-11-26 20:03:24.633463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.685 [2024-11-26 20:03:24.633468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.685 [2024-11-26 20:03:24.633473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.685 [2024-11-26 20:03:24.633478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42952 len:8 PRP1 0x0 PRP2 0x0 00:25:30.685 [2024-11-26 20:03:24.633483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.685 [2024-11-26 20:03:24.633488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.685 [2024-11-26 20:03:24.633492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.685 [2024-11-26 20:03:24.633496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42960 len:8 PRP1 0x0 PRP2 0x0 00:25:30.685 [2024-11-26 20:03:24.633500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.685 [2024-11-26 20:03:24.633506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.685 [2024-11-26 20:03:24.633509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.685 [2024-11-26 20:03:24.633513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42968 len:8 PRP1 0x0 PRP2 0x0 00:25:30.685 [2024-11-26 20:03:24.633518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.685 [2024-11-26 20:03:24.633524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.685 [2024-11-26 20:03:24.633528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.685 [2024-11-26 20:03:24.633532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42976 len:8 PRP1 0x0 PRP2 0x0 00:25:30.685 [2024-11-26 20:03:24.633537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.685 [2024-11-26 20:03:24.633542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.685 [2024-11-26 20:03:24.633546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.685 [2024-11-26 20:03:24.633550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:42984 len:8 PRP1 0x0 PRP2 0x0 00:25:30.685 [2024-11-26 20:03:24.633555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.685 [2024-11-26 20:03:24.644343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.685 [2024-11-26 20:03:24.644370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.685 [2024-11-26 20:03:24.644380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43776 len:8 PRP1 0x0 PRP2 0x0 00:25:30.685 [2024-11-26 20:03:24.644388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.685 [2024-11-26 20:03:24.644395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.685 [2024-11-26 20:03:24.644401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.685 [2024-11-26 20:03:24.644407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43784 len:8 PRP1 0x0 PRP2 0x0 00:25:30.685 [2024-11-26 20:03:24.644414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.685 [2024-11-26 20:03:24.644431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.685 [2024-11-26 20:03:24.644436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.685 [2024-11-26 20:03:24.644442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43792 len:8 PRP1 0x0 PRP2 0x0 00:25:30.685 [2024-11-26 20:03:24.644449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.685 [2024-11-26 20:03:24.644456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.685 [2024-11-26 20:03:24.644461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.685 [2024-11-26 20:03:24.644467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43800 len:8 PRP1 0x0 PRP2 0x0 00:25:30.685 [2024-11-26 20:03:24.644474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.685 [2024-11-26 20:03:24.644481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.685 [2024-11-26 20:03:24.644486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.685 [2024-11-26 20:03:24.644492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43808 len:8 PRP1 0x0 PRP2 0x0 00:25:30.685 [2024-11-26 20:03:24.644499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.685 [2024-11-26 20:03:24.644507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.685 [2024-11-26 20:03:24.644512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.685 [2024-11-26 20:03:24.644518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43816 len:8 PRP1 0x0 PRP2 0x0 00:25:30.685 
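The "(00/08)" in each completion above is the NVMe status pair sct/sc: status code type 0x00 (generic command status) with status code 0x08, Command Aborted due to SQ Deletion. No device or media error occurred: the test tore down the I/O qpair while requests were still queued, so the driver's nvme_qpair_abort_queued_reqs() completed each of them manually with that synthetic status. Below is a minimal sketch of how a host completion callback can classify this status, assuming the SPDK headers are available; the callback and counter names are hypothetical, the enums and helpers are SPDK's own.

#include <stdio.h>
#include "spdk/nvme.h"

static uint64_t g_sq_deletion_aborts;   /* hypothetical counter */

/* Matches the spdk_nvme_cmd_cb signature used for I/O submissions. */
static void
io_complete_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
{
        (void)ctx;
        if (spdk_nvme_cpl_is_error(cpl) &&
            cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
            cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                /* Aborted because its submission queue was deleted; the
                 * command never failed on the device and can be resubmitted
                 * on a new qpair. */
                g_sq_deletion_aborts++;
                return;
        }
        if (spdk_nvme_cpl_is_error(cpl)) {
                fprintf(stderr, "I/O failed: sct=0x%x sc=0x%x\n",
                        (unsigned)cpl->status.sct, (unsigned)cpl->status.sc);
        }
}

With that distinction a consumer can retry SQ-deletion aborts instead of treating a flood like the one above as data-path failures.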
[... after a ~10.8 ms gap the abort flood resumes at 2024-11-26 20:03:24.644343: the identical cycle repeats for WRITE lba:43776-43888, WRITE lba:42992-43368 and READ lba:42872-42888 (len:8, LBA step 8), with a second ~7.8 ms stall before lba:43368 at 20:03:24.653707, timestamps through 20:03:24.653835; every completion is ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
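Floods like this are easier to audit aggregated than raw. The following is an illustrative, self-contained helper (hypothetical, not part of the autotest suite) that counts the SQ-deletion aborts in a saved copy of this log and reports the LBA span they covered; it assumes one log record per line and matches the substrings exactly as printed above.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(int argc, char **argv)
{
        FILE *f = fopen(argc > 1 ? argv[1] : "autotest.log", "r");
        char line[4096];
        unsigned long aborts = 0, lba, lba_min = ~0UL, lba_max = 0;
        const char *p;

        if (f == NULL) {
                perror("fopen");
                return 1;
        }
        while (fgets(line, sizeof(line), f) != NULL) {
                /* One "ABORTED - SQ DELETION" completion per record. */
                if (strstr(line, "ABORTED - SQ DELETION") != NULL) {
                        aborts++;
                }
                /* Track the LBA range seen in the command prints. */
                for (p = line; (p = strstr(p, "lba:")) != NULL; p += 4) {
                        lba = strtoul(p + 4, NULL, 10);
                        if (lba < lba_min) lba_min = lba;
                        if (lba > lba_max) lba_max = lba;
                }
        }
        fclose(f);
        if (aborts > 0) {
                printf("%lu completions aborted by SQ deletion, lba %lu-%lu\n",
                       aborts, lba_min, lba_max);
        }
        return 0;
}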
[... the cycle continues for READ lba:42896-42928 and WRITE lba:43376-43520 (len:8, LBA step 8), timestamps 2024-11-26 20:03:24.653843 through 20:03:24.654633, every completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...] 00:25:30.689 [2024-11-26 20:03:24.654642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.689 [2024-11-26 20:03:24.654649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.689 [2024-11-26 20:03:24.654657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43528 len:8 PRP1 0x0 PRP2 0x0 00:25:30.689 [2024-11-26 20:03:24.654666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.689 [2024-11-26 20:03:24.654714] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:30.689 [2024-11-26 20:03:24.654749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.689 [2024-11-26 20:03:24.654760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.689 [2024-11-26 20:03:24.654773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.689 [2024-11-26 20:03:24.654782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.689 [2024-11-26 20:03:24.654792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.689 [2024-11-26 20:03:24.654801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.689 [2024-11-26 20:03:24.654811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.689 [2024-11-26 20:03:24.654820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.689 [2024-11-26 20:03:24.654829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:25:30.689 [2024-11-26 20:03:24.654864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2fda0 (9): Bad file descriptor 00:25:30.689 [2024-11-26 20:03:24.659319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:30.689 [2024-11-26 20:03:24.724861] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
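Note: the abort cascade above is the expected failover path, not a test failure. When the active listener disappears mid-I/O, every queued READ/WRITE is completed manually with ABORTED - SQ DELETION (00/08), bdev_nvme fails over to the next configured trid (here 10.0.0.2:4422 to 10.0.0.2:4420), and the controller reset completes. A minimal bash sketch of the multipath setup that drives this, using the same rpc.py invocations that appear verbatim later in this log (socket path, ports, and NQN as used in this run):

    # Sketch only: attach the same subsystem over two TCP paths so bdev_nvme
    # can fail over between them (mirrors what host/failover.sh does here).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    # Detaching the active path triggers the SQ DELETION aborts and the
    # "Start failover ... Resetting controller successful" sequence above.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1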
00:25:30.689 11804.44 IOPS, 46.11 MiB/s [2024-11-26T19:03:31.510Z] 11929.00 IOPS, 46.60 MiB/s [2024-11-26T19:03:31.510Z] 11997.00 IOPS, 46.86 MiB/s [2024-11-26T19:03:31.510Z] 12071.08 IOPS, 47.15 MiB/s [2024-11-26T19:03:31.510Z] 12131.69 IOPS, 47.39 MiB/s [2024-11-26T19:03:31.510Z] 12178.79 IOPS, 47.57 MiB/s [2024-11-26T19:03:31.510Z] 12224.67 IOPS, 47.75 MiB/s 00:25:30.689 Latency(us) 00:25:30.689 [2024-11-26T19:03:31.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.689 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:30.689 Verification LBA range: start 0x0 length 0x4000 00:25:30.689 NVMe0n1 : 15.01 12225.25 47.75 1163.35 0.00 9539.12 546.13 28835.84 00:25:30.689 [2024-11-26T19:03:31.510Z] =================================================================================================================== 00:25:30.689 [2024-11-26T19:03:31.510Z] Total : 12225.25 47.75 1163.35 0.00 9539.12 546.13 28835.84 00:25:30.689 Received shutdown signal, test time was about 15.000000 seconds 00:25:30.689 00:25:30.689 Latency(us) 00:25:30.689 [2024-11-26T19:03:31.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.689 [2024-11-26T19:03:31.510Z] =================================================================================================================== 00:25:30.689 [2024-11-26T19:03:31.510Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:30.689 20:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:30.689 20:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:30.689 20:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:30.689 20:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3774171 00:25:30.689 20:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3774171 /var/tmp/bdevperf.sock 00:25:30.689 20:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:30.689 20:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3774171 ']' 00:25:30.689 20:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:30.689 20:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:30.689 20:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:30.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:30.689 20:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:30.689 20:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:31.260 20:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:31.260 20:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:31.260 20:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:31.260 [2024-11-26 20:03:31.985099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:31.260 20:03:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:31.519 [2024-11-26 20:03:32.165524] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:31.519 20:03:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:31.779 NVMe0n1 00:25:31.779 20:03:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:32.039 00:25:32.039 20:03:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:32.610 00:25:32.610 20:03:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:32.610 20:03:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:32.610 20:03:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:32.870 20:03:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:36.168 20:03:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:36.168 20:03:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:36.168 20:03:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3775362 00:25:36.168 20:03:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:36.168 20:03:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3775362 00:25:37.109 { 00:25:37.109 "results": [ 00:25:37.109 { 00:25:37.109 "job": "NVMe0n1", 00:25:37.109 "core_mask": "0x1", 
00:25:37.109 "workload": "verify", 00:25:37.109 "status": "finished", 00:25:37.109 "verify_range": { 00:25:37.109 "start": 0, 00:25:37.109 "length": 16384 00:25:37.109 }, 00:25:37.109 "queue_depth": 128, 00:25:37.109 "io_size": 4096, 00:25:37.109 "runtime": 1.010468, 00:25:37.109 "iops": 12714.900422378541, 00:25:37.109 "mibps": 49.66757977491618, 00:25:37.109 "io_failed": 0, 00:25:37.109 "io_timeout": 0, 00:25:37.109 "avg_latency_us": 10033.202689912827, 00:25:37.109 "min_latency_us": 1911.4666666666667, 00:25:37.109 "max_latency_us": 8574.293333333333 00:25:37.109 } 00:25:37.109 ], 00:25:37.109 "core_count": 1 00:25:37.109 } 00:25:37.109 20:03:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:37.109 [2024-11-26 20:03:31.024523] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:25:37.109 [2024-11-26 20:03:31.024583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3774171 ] 00:25:37.109 [2024-11-26 20:03:31.106266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.109 [2024-11-26 20:03:31.135305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.109 [2024-11-26 20:03:33.553696] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:37.109 [2024-11-26 20:03:33.553732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.109 [2024-11-26 20:03:33.553741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.109 [2024-11-26 20:03:33.553748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.109 [2024-11-26 20:03:33.553754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.109 [2024-11-26 20:03:33.553759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.109 [2024-11-26 20:03:33.553765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.109 [2024-11-26 20:03:33.553770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.109 [2024-11-26 20:03:33.553775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.109 [2024-11-26 20:03:33.553781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:25:37.109 [2024-11-26 20:03:33.553802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:25:37.109 [2024-11-26 20:03:33.553813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174dda0 (9): Bad file descriptor 00:25:37.109 [2024-11-26 20:03:33.561117] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:25:37.109 Running I/O for 1 seconds... 00:25:37.109 12679.00 IOPS, 49.53 MiB/s 00:25:37.109 Latency(us) 00:25:37.109 [2024-11-26T19:03:37.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:37.109 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:37.109 Verification LBA range: start 0x0 length 0x4000 00:25:37.109 NVMe0n1 : 1.01 12714.90 49.67 0.00 0.00 10033.20 1911.47 8574.29 00:25:37.109 [2024-11-26T19:03:37.930Z] =================================================================================================================== 00:25:37.109 [2024-11-26T19:03:37.930Z] Total : 12714.90 49.67 0.00 0.00 10033.20 1911.47 8574.29 00:25:37.109 20:03:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:37.109 20:03:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:37.370 20:03:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:37.630 20:03:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:37.631 20:03:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:37.631 20:03:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:37.891 20:03:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:41.250 20:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:41.250 20:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:41.250 20:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3774171 00:25:41.250 20:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3774171 ']' 00:25:41.250 20:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3774171 00:25:41.250 20:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:41.250 20:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:41.250 20:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3774171 00:25:41.250 20:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:41.250 20:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:41.250 20:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3774171' 00:25:41.250 killing process with pid 3774171 00:25:41.250 20:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3774171 00:25:41.250 20:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3774171 00:25:41.250 20:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:41.250 20:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:41.511 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:41.512 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:41.512 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:41.512 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:41.512 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:41.512 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:41.512 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:41.512 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:41.512 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:41.512 rmmod nvme_tcp 00:25:41.512 rmmod nvme_fabrics 00:25:41.512 rmmod nvme_keyring 00:25:41.512 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:41.512 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:41.512 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:41.512 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3770440 ']' 00:25:41.512 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3770440 00:25:41.512 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3770440 ']' 00:25:41.512 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3770440 00:25:41.512 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:41.512 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:41.512 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3770440 00:25:41.512 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:41.512 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:41.512 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3770440' 00:25:41.512 killing process with pid 3770440 00:25:41.512 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3770440 00:25:41.512 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3770440 00:25:41.774 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:25:41.774 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:41.774 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:41.774 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:41.774 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:25:41.774 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:25:41.774 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:41.774 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:41.774 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:41.774 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.774 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:41.774 20:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.326 20:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:44.326 00:25:44.326 real 0m40.702s 00:25:44.326 user 2m5.076s 00:25:44.326 sys 0m8.939s 00:25:44.326 20:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:44.326 20:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:44.326 ************************************ 00:25:44.326 END TEST nvmf_failover 00:25:44.326 ************************************ 00:25:44.326 20:03:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:44.326 20:03:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:44.326 20:03:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:44.326 20:03:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.326 ************************************ 00:25:44.326 START TEST nvmf_host_discovery 00:25:44.326 ************************************ 00:25:44.326 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:44.326 * Looking for test storage... 
00:25:44.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:44.326 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:44.326 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:25:44.326 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:44.326 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:44.326 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:44.326 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:44.326 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:44.326 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:44.326 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:44.326 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:44.326 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:44.326 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:44.326 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:44.326 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:44.326 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:44.326 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:44.326 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:44.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.327 --rc genhtml_branch_coverage=1 00:25:44.327 --rc genhtml_function_coverage=1 00:25:44.327 --rc genhtml_legend=1 00:25:44.327 --rc geninfo_all_blocks=1 00:25:44.327 --rc geninfo_unexecuted_blocks=1 00:25:44.327 00:25:44.327 ' 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:44.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.327 --rc genhtml_branch_coverage=1 00:25:44.327 --rc genhtml_function_coverage=1 00:25:44.327 --rc genhtml_legend=1 00:25:44.327 --rc geninfo_all_blocks=1 00:25:44.327 --rc geninfo_unexecuted_blocks=1 00:25:44.327 00:25:44.327 ' 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:44.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.327 --rc genhtml_branch_coverage=1 00:25:44.327 --rc genhtml_function_coverage=1 00:25:44.327 --rc genhtml_legend=1 00:25:44.327 --rc geninfo_all_blocks=1 00:25:44.327 --rc geninfo_unexecuted_blocks=1 00:25:44.327 00:25:44.327 ' 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:44.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.327 --rc genhtml_branch_coverage=1 00:25:44.327 --rc genhtml_function_coverage=1 00:25:44.327 --rc genhtml_legend=1 00:25:44.327 --rc geninfo_all_blocks=1 00:25:44.327 --rc geninfo_unexecuted_blocks=1 00:25:44.327 00:25:44.327 ' 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:44.327 20:03:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:44.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:44.327 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:44.328 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:44.328 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:44.328 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:44.328 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:44.328 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:44.328 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:44.328 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:44.328 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:44.328 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:44.328 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:44.328 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:44.328 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:44.328 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:44.328 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:44.328 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.328 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:44.328 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.328 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:44.328 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:44.328 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:44.328 20:03:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:52.478 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:52.478 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:52.478 20:03:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:52.478 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:52.479 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:52.479 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:52.479 
20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:52.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:52.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.696 ms 00:25:52.479 00:25:52.479 --- 10.0.0.2 ping statistics --- 00:25:52.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.479 rtt min/avg/max/mdev = 0.696/0.696/0.696/0.000 ms 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:52.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:52.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:25:52.479 00:25:52.479 --- 10.0.0.1 ping statistics --- 00:25:52.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.479 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3780527 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3780527 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3780527 ']' 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:52.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:52.479 20:03:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.479 [2024-11-26 20:03:52.457088] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:25:52.479 [2024-11-26 20:03:52.457156] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:52.479 [2024-11-26 20:03:52.555995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.479 [2024-11-26 20:03:52.606771] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:52.479 [2024-11-26 20:03:52.606824] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:52.479 [2024-11-26 20:03:52.606832] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:52.479 [2024-11-26 20:03:52.606839] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:52.479 [2024-11-26 20:03:52.606845] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:52.479 [2024-11-26 20:03:52.607675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.479 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:52.479 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:52.479 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:52.479 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:52.479 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.741 [2024-11-26 20:03:53.315404] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.741 [2024-11-26 20:03:53.327662] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.741 null0 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.741 null1 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3780876 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3780876 /tmp/host.sock 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3780876 ']' 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:52.741 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:52.741 20:03:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.741 [2024-11-26 20:03:53.432843] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:25:52.741 [2024-11-26 20:03:53.432909] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3780876 ] 00:25:52.741 [2024-11-26 20:03:53.527743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.003 [2024-11-26 20:03:53.580888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:53.576 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.837 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:53.837 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:53.837 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.837 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:53.837 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.837 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:53.837 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.837 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:53.837 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.837 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:53.837 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:53.837 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.837 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.837 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.837 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.838 [2024-11-26 20:03:54.602819] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:53.838 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.098 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:54.099 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:54.099 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:54.099 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:54.099 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:54.099 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:54.099 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:54.099 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:54.099 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.099 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:54.099 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.099 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:54.099 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.099 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:25:54.099 20:03:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:54.669 [2024-11-26 20:03:55.263247] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:54.669 [2024-11-26 20:03:55.263267] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:54.669 [2024-11-26 20:03:55.263280] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:54.669 
[2024-11-26 20:03:55.389711] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:54.669 [2024-11-26 20:03:55.444424] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:54.669 [2024-11-26 20:03:55.445554] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x24477f0:1 started. 00:25:54.669 [2024-11-26 20:03:55.447190] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:54.669 [2024-11-26 20:03:55.447209] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:54.669 [2024-11-26 20:03:55.452711] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x24477f0 was disconnected and freed. delete nvme_qpair. 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.240 20:03:55 
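Every RPC the two xtrace streams above issue can be replayed by hand with SPDK's stock client, scripts/rpc.py (the -s flag picks the application socket: the target answers on its default /var/tmp/spdk.sock, the host app on /tmp/host.sock as started above). A sketch, in the same order as this run:

  # Target: TCP transport, the well-known discovery subsystem on 8009, two null bdevs.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  scripts/rpc.py bdev_null_create null0 1000 512
  scripts/rpc.py bdev_null_create null1 1000 512
  # Host: follow the discovery service and auto-attach whatever it advertises.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  # Target: only now publish a data subsystem; the host picks it up from the
  # discovery log page with no further prompting, which is what the
  # "new subsystem nvme0" / "attach nvme0 done" records above show.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test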
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.240 20:03:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.240 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.240 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:55.240 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:55.240 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:55.240 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:55.240 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:55.240 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.240 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.240 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.240 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:55.240 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:55.240 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:55.240 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:55.240 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:55.240 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:55.240 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.240 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:55.240 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.240 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:55.240 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.240 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:55.501 [2024-11-26 20:03:56.294260] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x24479d0:1 started. 00:25:55.501 [2024-11-26 20:03:56.304695] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x24479d0 was disconnected and freed. delete nvme_qpair. 
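The polling machinery that keeps reappearing in the trace is compact enough to reconstruct. A sketch of the two helpers as their xtrace lines imply them (the real definitions live in autotest_common.sh and host/discovery.sh; the timeout branch is an assumption, since this run never reaches it):

  # Bounded poll: re-evaluate an arbitrary test expression once per second,
  # at most ten times (autotest_common.sh@918-924 above).
  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1
      done
      return 1    # assumed failure path; not exercised in this log
  }

  # Count the notifications newer than the last consumed id, then advance
  # the cursor (discovery.sh@74-75; notify_id steps 0 -> 1 -> 2 above).
  get_notification_count() {
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
          -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }

Conditions are passed as strings ('[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]') precisely so that eval re-runs the embedded RPCs on every iteration rather than evaluating them once at call time.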
00:25:55.761 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.761 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:55.761 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:55.761 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:55.761 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:55.761 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:55.761 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:55.761 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:55.761 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:55.761 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:55.761 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:55.761 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:55.761 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.762 [2024-11-26 20:03:56.383609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:55.762 [2024-11-26 20:03:56.383938] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:55.762 [2024-11-26 20:03:56.383959] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:55.762 20:03:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.762 [2024-11-26 20:03:56.512750] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:55.762 20:03:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:56.023 [2024-11-26 20:03:56.775051] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:56.023 [2024-11-26 20:03:56.775092] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:56.023 [2024-11-26 20:03:56.775101] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:56.023 [2024-11-26 20:03:56.775106] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.965 [2024-11-26 20:03:57.655509] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:56.965 [2024-11-26 20:03:57.655531] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:56.965 [2024-11-26 20:03:57.658964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.965 [2024-11-26 20:03:57.658987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.965 [2024-11-26 20:03:57.658997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.965 [2024-11-26 20:03:57.659005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.965 [2024-11-26 20:03:57.659013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.965 [2024-11-26 20:03:57.659021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.965 [2024-11-26 20:03:57.659029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.965 [2024-11-26 20:03:57.659036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.965 [2024-11-26 20:03:57.659044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417e10 is same with the state(6) to be set 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.965 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.965 [2024-11-26 20:03:57.668976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2417e10 (9): Bad file descriptor 00:25:56.965 [2024-11-26 20:03:57.679012] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:56.965 [2024-11-26 20:03:57.679025] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:56.965 [2024-11-26 20:03:57.679031] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:56.965 [2024-11-26 20:03:57.679036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:56.965 [2024-11-26 20:03:57.679053] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:56.965 [2024-11-26 20:03:57.679491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.965 [2024-11-26 20:03:57.679529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2417e10 with addr=10.0.0.2, port=4420 00:25:56.965 [2024-11-26 20:03:57.679540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417e10 is same with the state(6) to be set 00:25:56.965 [2024-11-26 20:03:57.679559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2417e10 (9): Bad file descriptor 00:25:56.965 [2024-11-26 20:03:57.679589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:56.966 [2024-11-26 20:03:57.679598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:56.966 [2024-11-26 20:03:57.679606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:56.966 [2024-11-26 20:03:57.679614] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:56.966 [2024-11-26 20:03:57.679620] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:56.966 [2024-11-26 20:03:57.679625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
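The burst of connect() failed, errno = 111 records above is the intended outcome of host/discovery.sh@127 removing the 4420 listener: errno 111 is ECONNREFUSED on Linux, so each reconnect attempt bdev_nvme makes against the now-closed port is refused, logged, and retried. Decoding the number takes one line on any Linux box:

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # ECONNREFUSED - Connection refused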
00:25:56.966 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.966 [2024-11-26 20:03:57.689085] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:56.966 [2024-11-26 20:03:57.689099] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:56.966 [2024-11-26 20:03:57.689104] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:56.966 [2024-11-26 20:03:57.689109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:56.966 [2024-11-26 20:03:57.689125] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:56.966 [2024-11-26 20:03:57.689550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.966 [2024-11-26 20:03:57.689588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2417e10 with addr=10.0.0.2, port=4420 00:25:56.966 [2024-11-26 20:03:57.689599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417e10 is same with the state(6) to be set 00:25:56.966 [2024-11-26 20:03:57.689617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2417e10 (9): Bad file descriptor 00:25:56.966 [2024-11-26 20:03:57.689630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:56.966 [2024-11-26 20:03:57.689637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:56.966 [2024-11-26 20:03:57.689645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:56.966 [2024-11-26 20:03:57.689652] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:56.966 [2024-11-26 20:03:57.689658] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:56.966 [2024-11-26 20:03:57.689662] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:56.966 [2024-11-26 20:03:57.699157] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:56.966 [2024-11-26 20:03:57.699177] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:56.966 [2024-11-26 20:03:57.699182] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:56.966 [2024-11-26 20:03:57.699187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:56.966 [2024-11-26 20:03:57.699203] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:56.966 [2024-11-26 20:03:57.699430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.966 [2024-11-26 20:03:57.699444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2417e10 with addr=10.0.0.2, port=4420 00:25:56.966 [2024-11-26 20:03:57.699452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417e10 is same with the state(6) to be set 00:25:56.966 [2024-11-26 20:03:57.699469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2417e10 (9): Bad file descriptor 00:25:56.966 [2024-11-26 20:03:57.699480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:56.966 [2024-11-26 20:03:57.699486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:56.966 [2024-11-26 20:03:57.699494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:56.966 [2024-11-26 20:03:57.699500] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:56.966 [2024-11-26 20:03:57.699505] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:56.966 [2024-11-26 20:03:57.699509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:56.966 [2024-11-26 20:03:57.709236] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:56.966 [2024-11-26 20:03:57.709250] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:56.966 [2024-11-26 20:03:57.709255] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:56.966 [2024-11-26 20:03:57.709259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:56.966 [2024-11-26 20:03:57.709275] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:56.966 [2024-11-26 20:03:57.709617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.966 [2024-11-26 20:03:57.709630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2417e10 with addr=10.0.0.2, port=4420 00:25:56.966 [2024-11-26 20:03:57.709638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417e10 is same with the state(6) to be set 00:25:56.966 [2024-11-26 20:03:57.709649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2417e10 (9): Bad file descriptor 00:25:56.966 [2024-11-26 20:03:57.709660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:56.966 [2024-11-26 20:03:57.709667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:56.966 [2024-11-26 20:03:57.709675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:56.966 [2024-11-26 20:03:57.709681] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:56.966 [2024-11-26 20:03:57.709686] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:56.966 [2024-11-26 20:03:57.709691] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:56.966 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.966 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:56.966 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:56.966 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:56.966 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:56.966 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:56.966 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:56.966 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:56.966 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:56.966 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.966 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:56.966 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.966 [2024-11-26 20:03:57.719306] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:56.966 [2024-11-26 20:03:57.719319] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:56.966 [2024-11-26 20:03:57.719325] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:56.966 [2024-11-26 20:03:57.719330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:56.966 [2024-11-26 20:03:57.719344] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
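The waitforcondition trace above (autotest_common.sh@918 through @922) is a generic retry helper: it eval's the quoted condition string until it succeeds or a retry budget runs out. A minimal reconstruction from the xtrace; the pause between attempts is an assumption, since the trace never reaches a sleep here:

    waitforcondition() {
        local cond=$1   # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local max=10    # retry budget, as seen at autotest_common.sh@919
        while ((max--)); do
            eval "$cond" && return 0
            sleep 1     # assumed interval; the real helper's delay is not shown in this trace
        done
        return 1
    }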
00:25:56.966 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.966 [2024-11-26 20:03:57.719620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.966 [2024-11-26 20:03:57.719632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2417e10 with addr=10.0.0.2, port=4420 00:25:56.966 [2024-11-26 20:03:57.719640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417e10 is same with the state(6) to be set 00:25:56.966 [2024-11-26 20:03:57.719651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2417e10 (9): Bad file descriptor 00:25:56.966 [2024-11-26 20:03:57.719661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:56.966 [2024-11-26 20:03:57.719667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:56.966 [2024-11-26 20:03:57.719675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:56.966 [2024-11-26 20:03:57.719680] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:56.966 [2024-11-26 20:03:57.719685] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:56.966 [2024-11-26 20:03:57.719690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:56.966 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:56.966 [2024-11-26 20:03:57.729376] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:56.966 [2024-11-26 20:03:57.729391] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:56.966 [2024-11-26 20:03:57.729395] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:56.966 [2024-11-26 20:03:57.729400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:56.966 [2024-11-26 20:03:57.729415] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
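get_bdev_list, traced at host/discovery.sh@55 above, reduces the host app's bdev inventory to one sorted line that the condition string can compare directly. A minimal reconstruction, assuming rpc_cmd wraps scripts/rpc.py as it does elsewhere in this suite:

    get_bdev_list() {
        # Ask the host-side SPDK app (socket /tmp/host.sock) for all bdevs,
        # keep only the names, sort them, and join them with single spaces.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    # Expected to settle at "nvme0n1 nvme0n2" once both namespaces reattach via 4421.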
00:25:56.966 [2024-11-26 20:03:57.729695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.966 [2024-11-26 20:03:57.729707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2417e10 with addr=10.0.0.2, port=4420 00:25:56.966 [2024-11-26 20:03:57.729715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417e10 is same with the state(6) to be set 00:25:56.966 [2024-11-26 20:03:57.729726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2417e10 (9): Bad file descriptor 00:25:56.966 [2024-11-26 20:03:57.729742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:56.966 [2024-11-26 20:03:57.729753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:56.966 [2024-11-26 20:03:57.729761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:56.967 [2024-11-26 20:03:57.729767] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:56.967 [2024-11-26 20:03:57.729772] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:56.967 [2024-11-26 20:03:57.729776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:56.967 [2024-11-26 20:03:57.739445] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:56.967 [2024-11-26 20:03:57.739457] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:56.967 [2024-11-26 20:03:57.739462] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:56.967 [2024-11-26 20:03:57.739466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:56.967 [2024-11-26 20:03:57.739480] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:56.967 [2024-11-26 20:03:57.739760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.967 [2024-11-26 20:03:57.739771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2417e10 with addr=10.0.0.2, port=4420 00:25:56.967 [2024-11-26 20:03:57.739778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2417e10 is same with the state(6) to be set 00:25:56.967 [2024-11-26 20:03:57.739789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2417e10 (9): Bad file descriptor 00:25:56.967 [2024-11-26 20:03:57.739811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:56.967 [2024-11-26 20:03:57.739818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:56.967 [2024-11-26 20:03:57.739825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:56.967 [2024-11-26 20:03:57.739831] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
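Just below, the reconnect storm ends: the discovery poller reports 4420 not found and 4421 found again, and host/discovery.sh@131 then waits for get_subsystem_paths to report the new port. A minimal sketch of that query, taken straight from the host/discovery.sh@63 trace and assuming the repo path used by this job:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Print the trsvcid of every active path of controller nvme0; expect "4421".
    "$rpc" -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs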
00:25:56.967 [2024-11-26 20:03:57.739836] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:56.967 [2024-11-26 20:03:57.739840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:56.967 [2024-11-26 20:03:57.743828] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:56.967 [2024-11-26 20:03:57.743846] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:56.967 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.967 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:56.967 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:56.967 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:56.967 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:56.967 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:56.967 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:56.967 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:56.967 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:56.967 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:56.967 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.967 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.967 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:56.967 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:56.967 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:57.228 20:03:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:57.228 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:57.229 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:57.229 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.229 20:03:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.229 20:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.229 20:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:57.229 20:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:57.229 20:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:57.229 20:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:57.229 20:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:57.229 20:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.229 20:03:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.612 [2024-11-26 20:03:59.104352] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:58.612 [2024-11-26 20:03:59.104367] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:58.612 [2024-11-26 20:03:59.104376] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:58.612 [2024-11-26 20:03:59.192630] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:58.873 [2024-11-26 20:03:59.501037] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:58.873 [2024-11-26 20:03:59.501696] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x2453540:1 started. 
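The notify_get_notifications trace above shows the suite's event bookkeeping: it asks for all notifications with an id greater than the last one seen, counts them, and advances the cursor (here two new events, so notify_id moves from 2 to 4). A minimal reconstruction under the same assumptions (host app on /tmp/host.sock, jq available):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    notify_id=2
    notification_count=$("$rpc" -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))   # 2 + 2 = 4, matching the trace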
00:25:58.873 [2024-11-26 20:03:59.503086] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:58.873 [2024-11-26 20:03:59.503110] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.873 [2024-11-26 20:03:59.512804] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x2453540 was disconnected and freed. delete nvme_qpair. 
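With discovery for bdev name nvme already running, host/discovery.sh@143 wraps a second, identical bdev_nvme_start_discovery in the NOT helper, which inverts the exit status: the call must fail, and the request/response dump that follows records the expected JSON-RPC error -17 ("File exists"). A minimal sketch of that assertion, assuming the same repo path and host socket:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Starting a second discovery service under an already-used name must fail.
    if ! "$rpc" -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
            -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w; then
        echo 'duplicate discovery start rejected, as expected'
    fi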
00:25:58.873 request: 00:25:58.873 { 00:25:58.873 "name": "nvme", 00:25:58.873 "trtype": "tcp", 00:25:58.873 "traddr": "10.0.0.2", 00:25:58.873 "adrfam": "ipv4", 00:25:58.873 "trsvcid": "8009", 00:25:58.873 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:58.873 "wait_for_attach": true, 00:25:58.873 "method": "bdev_nvme_start_discovery", 00:25:58.873 "req_id": 1 00:25:58.873 } 00:25:58.873 Got JSON-RPC error response 00:25:58.873 response: 00:25:58.873 { 00:25:58.873 "code": -17, 00:25:58.873 "message": "File exists" 00:25:58.873 } 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.873 request: 00:25:58.873 { 00:25:58.873 "name": "nvme_second", 00:25:58.873 "trtype": "tcp", 00:25:58.873 "traddr": "10.0.0.2", 00:25:58.873 "adrfam": "ipv4", 00:25:58.873 "trsvcid": "8009", 00:25:58.873 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:58.873 "wait_for_attach": true, 00:25:58.873 "method": "bdev_nvme_start_discovery", 00:25:58.873 "req_id": 1 00:25:58.873 } 00:25:58.873 Got JSON-RPC error response 00:25:58.873 response: 00:25:58.873 { 00:25:58.873 "code": -17, 00:25:58.873 "message": "File exists" 00:25:58.873 } 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:58.873 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.134 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:59.134 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:59.134 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.134 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:59.134 20:03:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.134 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:59.134 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.134 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:59.134 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.134 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:59.134 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:59.134 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:59.134 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:59.134 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:59.134 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:59.134 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:59.134 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:59.134 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:59.134 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.134 20:03:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.073 [2024-11-26 20:04:00.768170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.073 [2024-11-26 20:04:00.768203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2561aa0 with addr=10.0.0.2, port=8010 00:26:00.073 [2024-11-26 20:04:00.768215] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:00.073 [2024-11-26 20:04:00.768221] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:00.073 [2024-11-26 20:04:00.768226] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:01.013 [2024-11-26 20:04:01.770515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.013 [2024-11-26 20:04:01.770535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2561aa0 with addr=10.0.0.2, port=8010 00:26:01.013 [2024-11-26 20:04:01.770545] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:01.013 [2024-11-26 20:04:01.770549] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:01.013 [2024-11-26 20:04:01.770554] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:01.955 [2024-11-26 20:04:02.772501] 
bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:02.217 request: 00:26:02.217 { 00:26:02.217 "name": "nvme_second", 00:26:02.217 "trtype": "tcp", 00:26:02.217 "traddr": "10.0.0.2", 00:26:02.217 "adrfam": "ipv4", 00:26:02.217 "trsvcid": "8010", 00:26:02.217 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:02.217 "wait_for_attach": false, 00:26:02.217 "attach_timeout_ms": 3000, 00:26:02.217 "method": "bdev_nvme_start_discovery", 00:26:02.217 "req_id": 1 00:26:02.217 } 00:26:02.217 Got JSON-RPC error response 00:26:02.217 response: 00:26:02.217 { 00:26:02.217 "code": -110, 00:26:02.217 "message": "Connection timed out" 00:26:02.217 } 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3780876 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:02.217 rmmod nvme_tcp 00:26:02.217 rmmod nvme_fabrics 00:26:02.217 rmmod nvme_keyring 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:26:02.217 20:04:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3780527 ']' 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3780527 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3780527 ']' 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3780527 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3780527 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3780527' 00:26:02.217 killing process with pid 3780527 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3780527 00:26:02.217 20:04:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3780527 00:26:02.478 20:04:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:02.479 20:04:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:02.479 20:04:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:02.479 20:04:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:26:02.479 20:04:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:02.479 20:04:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:26:02.479 20:04:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:26:02.479 20:04:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:02.479 20:04:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:02.479 20:04:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.479 20:04:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.479 20:04:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.392 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:04.392 00:26:04.392 real 0m20.545s 00:26:04.392 user 0m23.867s 00:26:04.392 sys 0m7.357s 00:26:04.392 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:04.392 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.392 ************************************ 00:26:04.392 END TEST nvmf_host_discovery 00:26:04.392 ************************************ 00:26:04.392 20:04:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test 
nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:04.392 20:04:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:04.392 20:04:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:04.392 20:04:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.654 ************************************ 00:26:04.654 START TEST nvmf_host_multipath_status 00:26:04.654 ************************************ 00:26:04.654 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:04.654 * Looking for test storage... 00:26:04.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:04.654 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:04.654 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:26:04.654 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:04.654 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:04.654 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:04.654 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:04.654 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:04.654 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:04.654 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:04.654 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:04.654 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:04.654 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:04.654 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:04.654 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:04.654 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:04.654 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:04.654 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:04.654 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:04.654 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:04.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.655 --rc genhtml_branch_coverage=1 00:26:04.655 --rc genhtml_function_coverage=1 00:26:04.655 --rc genhtml_legend=1 00:26:04.655 --rc geninfo_all_blocks=1 00:26:04.655 --rc geninfo_unexecuted_blocks=1 00:26:04.655 00:26:04.655 ' 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:04.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.655 --rc genhtml_branch_coverage=1 00:26:04.655 --rc genhtml_function_coverage=1 00:26:04.655 --rc genhtml_legend=1 00:26:04.655 --rc geninfo_all_blocks=1 00:26:04.655 --rc geninfo_unexecuted_blocks=1 00:26:04.655 00:26:04.655 ' 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:04.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.655 --rc genhtml_branch_coverage=1 00:26:04.655 --rc genhtml_function_coverage=1 00:26:04.655 --rc genhtml_legend=1 00:26:04.655 --rc geninfo_all_blocks=1 00:26:04.655 --rc geninfo_unexecuted_blocks=1 00:26:04.655 00:26:04.655 ' 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:04.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.655 --rc genhtml_branch_coverage=1 00:26:04.655 --rc genhtml_function_coverage=1 00:26:04.655 --rc genhtml_legend=1 00:26:04.655 --rc geninfo_all_blocks=1 00:26:04.655 --rc geninfo_unexecuted_blocks=1 00:26:04.655 00:26:04.655 ' 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
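As multipath_status.sh sources test/nvmf/common.sh below, the host identity is generated with nvme-cli: nvmf/common.sh@17 runs nvme gen-hostnqn, and the NVME_HOSTID visible in the log is the UUID suffix of that NQN. A minimal sketch, assuming nvme-cli is installed and that the suffix extraction matches what common.sh actually does:

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # prints nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumed: the UUID after the last colon
    echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"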
00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:04.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:04.655 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:04.916 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:04.916 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:04.916 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:04.916 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:04.916 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:04.917 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.917 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:04.917 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.917 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:04.917 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:04.917 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:26:04.917 20:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:26:13.059 20:04:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:13.059 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
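Two details above are worth pulling out of the trace. First, the "[: : integer expression expected" complaint from nvmf/common.sh line 33 comes from '[' '' -eq 1 ']': an unset or empty flag variable reaching a numeric test; a default of the form [ "${flag:-0}" -eq 1 ] (flag standing in for whatever variable common.sh tests there) would silence it. Second, gather_supported_nvmf_pci_devs is bucketing NICs by PCI vendor:device ID. Below is a minimal standalone sketch of that idea, not the SPDK helper itself, and with the Mellanox IDs collapsed to a wildcard instead of the explicit per-ID list traced above:

#!/usr/bin/env bash
# Sketch: sort NICs into the same e810/x722/mlx buckets by reading
# vendor/device IDs from sysfs.
e810=() x722=() mlx=()
for dev in /sys/bus/pci/devices/*; do
    ven=$(<"$dev/vendor") did=$(<"$dev/device")
    case "$ven:$did" in
        0x8086:0x1592|0x8086:0x159b) e810+=("${dev##*/}") ;;  # Intel E810, as matched above
        0x8086:0x37d2)               x722+=("${dev##*/}") ;;  # Intel X722
        0x15b3:*)                    mlx+=("${dev##*/}")  ;;  # any Mellanox ID (simplified)
    esac
done
printf 'e810: %s\nx722: %s\nmlx: %s\n' "${e810[*]}" "${x722[*]}" "${mlx[*]}"

On this host the scan lands both E810 ports (0x8086:0x159b) in e810, which is what the "Found 0000:4b:00.x" lines that follow report.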
00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:13.059 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:13.059 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:26:13.059 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:13.059 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:13.060 20:04:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:26:13.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:13.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms
00:26:13.060
00:26:13.060 --- 10.0.0.2 ping statistics ---
00:26:13.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:13.060 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms
00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:13.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:13.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms
00:26:13.060
00:26:13.060 --- 10.0.0.1 ping statistics ---
00:26:13.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:13.060 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms
00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0
00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3787055
00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3787055
00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3787055 ']'
00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:13.060 20:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:13.060 [2024-11-26 20:04:13.048602] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization...
00:26:13.060 [2024-11-26 20:04:13.048668] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:13.060 [2024-11-26 20:04:13.150800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:26:13.060 [2024-11-26 20:04:13.202676] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-11-26 20:04:13.202732] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-11-26 20:04:13.202741] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-11-26 20:04:13.202748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-11-26 20:04:13.202755] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[2024-11-26 20:04:13.204346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
[2024-11-26 20:04:13.204351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:13.321 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:13.321 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:26:13.321 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:13.321 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:13.321 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:13.321 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:13.321 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3787055
00:26:13.321 20:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:26:13.321 [2024-11-26 20:04:14.085193] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:13.321 20:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:26:13.581 Malloc0
00:26:13.581 20:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s
SPDK00000000000001 -r -m 2 00:26:13.842 20:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:14.101 20:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:14.361 [2024-11-26 20:04:14.925780] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:14.361 20:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:14.361 [2024-11-26 20:04:15.126266] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:14.361 20:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3787421 00:26:14.361 20:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:14.361 20:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:14.361 20:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3787421 /var/tmp/bdevperf.sock 00:26:14.361 20:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3787421 ']' 00:26:14.361 20:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:14.361 20:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:14.361 20:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:14.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
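At this point the target side is fully assembled. Stripped of the waitforlisten/trap plumbing, the sequence traced above is six RPCs, shown here as a condensed sketch (rpc is just shorthand for the full scripts/rpc.py path used in the trace):

#!/usr/bin/env bash
# One malloc-backed subsystem, two TCP listeners, so the host can reach
# the same namespace over two ports.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The two listeners (4420 and 4421) are the two "paths" whose ANA states the rest of this test flips back and forth.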
00:26:14.361 20:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:14.361 20:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:15.372 20:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:15.372 20:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:15.372 20:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:15.727 20:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:15.987 Nvme0n1 00:26:15.987 20:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:16.247 Nvme0n1 00:26:16.247 20:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:16.247 20:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:18.791 20:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:18.791 20:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:18.791 20:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:18.791 20:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:19.732 20:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:19.732 20:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:19.732 20:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.732 20:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:19.992 20:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.992 20:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:19.992 20:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.992 20:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:19.992 20:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:19.992 20:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:19.992 20:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.992 20:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:20.253 20:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.253 20:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:20.253 20:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.253 20:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:20.514 20:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.514 20:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:20.514 20:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.514 20:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:20.514 20:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.514 20:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:20.775 20:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.775 20:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:20.775 20:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.775 20:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:20.775 20:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
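The initiator half, built just before these checks over the bdevperf RPC socket, is the mirror image: both listeners were attached under the same controller name with -x multipath, which is why both attach calls above returned the same bdev, Nvme0n1. Condensed from the trace (rpc_b is shorthand introduced here, not a variable the script defines):

#!/usr/bin/env bash
# Two paths to one subsystem folded into a single multipath bdev.
rpc_b="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
$rpc_b bdev_nvme_set_options -r -1
$rpc_b bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
$rpc_b bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10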
00:26:21.035 20:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:21.035 20:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:22.420 20:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:22.420 20:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:22.420 20:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.420 20:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:22.420 20:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:22.420 20:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:22.420 20:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.420 20:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:22.420 20:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.420 20:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:22.420 20:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.420 20:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:22.682 20:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.682 20:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:22.682 20:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.682 20:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:22.942 20:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.942 20:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:22.942 20:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
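Every port_status line in these check cycles expands to the same two-step probe: dump the initiator's view of its I/O paths, then compare one jq-extracted field against the expected value. A minimal sketch reconstructed from the trace (argument order: port, field, expected):

#!/usr/bin/env bash
# port_status 4421 current false  ->  succeeds iff the 4421 path's
# "current" flag, as reported by bdev_nvme_get_io_paths, is "false".
port_status() {
    local got
    got=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
    [[ "$got" == "$3" ]]
}
port_status 4421 current false   # the check being run at this point in the trace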
00:26:22.942 20:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:23.204 20:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.204 20:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:23.204 20:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.204 20:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:23.204 20:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.204 20:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:23.204 20:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:23.464 20:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:23.724 20:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:24.667 20:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:24.667 20:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:24.667 20:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.667 20:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:24.928 20:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.928 20:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:24.928 20:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.928 20:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:24.928 20:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:24.928 20:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:24.928 20:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.928 20:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:25.189 20:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.189 20:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:25.189 20:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.189 20:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:25.450 20:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.450 20:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:25.450 20:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.450 20:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:25.450 20:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.450 20:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:25.711 20:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.711 20:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:25.711 20:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.711 20:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:25.711 20:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:25.972 20:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:25.972 20:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:27.357 20:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:27.357 20:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:27.357 20:04:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.357 20:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:27.357 20:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.357 20:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:27.357 20:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.357 20:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:27.357 20:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:27.357 20:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:27.357 20:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.357 20:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:27.618 20:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.618 20:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:27.618 20:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.618 20:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:27.879 20:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.879 20:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:27.879 20:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.879 20:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:28.138 20:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.138 20:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:28.138 20:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.138 20:04:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:28.138 20:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:28.138 20:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:28.138 20:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:28.399 20:04:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:28.659 20:04:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:29.602 20:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:29.602 20:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:29.602 20:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.602 20:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:29.861 20:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:29.861 20:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:29.861 20:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.861 20:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:29.861 20:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:29.861 20:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:29.861 20:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.861 20:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:30.119 20:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.119 20:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:30.119 20:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.119 20:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:30.378 20:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.378 20:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:30.378 20:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.378 20:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:30.378 20:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:30.378 20:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:30.378 20:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.378 20:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:30.638 20:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:30.638 20:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:30.638 20:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:30.898 20:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:30.899 20:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:32.282 20:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:32.282 20:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:32.282 20:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.282 20:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:32.282 20:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:32.282 20:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:32.282 20:04:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.282 20:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:32.282 20:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.282 20:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:32.282 20:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.282 20:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:32.543 20:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.543 20:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:32.543 20:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.543 20:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:32.803 20:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.803 20:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:32.803 20:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.803 20:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:33.063 20:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:33.063 20:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:33.063 20:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.063 20:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:33.063 20:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.063 20:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:33.323 20:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:26:33.323 20:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:33.584 20:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:33.584 20:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:34.965 20:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:34.965 20:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:34.965 20:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.965 20:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:34.965 20:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.965 20:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:34.965 20:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.965 20:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:34.965 20:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.965 20:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:34.965 20:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.965 20:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:35.225 20:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.225 20:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:35.225 20:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:35.225 20:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.486 20:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.486 20:04:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:35.486 20:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:35.486 20:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.747 20:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.747 20:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:35.747 20:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.747 20:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:35.747 20:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.747 20:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:35.747 20:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:36.008 20:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:36.268 20:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:37.212 20:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:37.212 20:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:37.212 20:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.212 20:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:37.472 20:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:37.472 20:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:37.472 20:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.472 20:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:37.472 20:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.472 20:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:37.472 20:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:37.472 20:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.732 20:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.732 20:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:37.732 20:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.732 20:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:37.992 20:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.992 20:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:37.992 20:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.992 20:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:37.992 20:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.992 20:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:37.992 20:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:37.992 20:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.252 20:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.252 20:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:38.252 20:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:38.512 20:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:38.512 20:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
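(Note on the helpers traced above: check_status and port_status poll bdevperf over its RPC socket and compare one attribute of each I/O path against an expected value. A minimal bash sketch reconstructed from the traced commands follows; the rpc.py invocation and the jq filter are taken verbatim from the trace, while the function bodies, the $rootdir variable, and the &&-chaining are assumptions, not the literal contents of host/multipath_status.sh.)

# Sketch of the status helpers, assuming $rootdir points at the SPDK tree.
port_status() {
    local port=$1 attr=$2 expected=$3 actual
    # Ask bdevperf for its I/O paths and pull one attribute (current,
    # connected, or accessible) of the path whose listener trsvcid matches.
    actual=$("$rootdir"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ "$actual" == "$expected" ]]
}

check_status() {
    # Expected current/connected/accessible flags for ports 4420 and 4421,
    # in the order the trace above evaluates them.
    port_status 4420 current "$1" && port_status 4421 current "$2" &&
    port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
    port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
}

# Example: after the set_ANA_state non_optimized non_optimized step above,
# the test expects
#   check_status true true true true true true
# i.e. both paths remain current, connected, and accessible.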
00:26:39.893 20:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:39.893 20:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:39.893 20:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.893 20:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:39.893 20:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.893 20:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:39.893 20:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.893 20:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:39.893 20:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.893 20:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:39.893 20:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.893 20:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:40.153 20:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.153 20:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:40.153 20:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.153 20:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:40.414 20:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.414 20:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:40.414 20:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.414 20:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:40.674 20:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.674 20:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:40.674 20:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.674 20:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:40.674 20:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.674 20:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:40.674 20:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:40.935 20:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:41.194 20:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:42.137 20:04:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:42.137 20:04:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:42.137 20:04:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.137 20:04:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:42.398 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.398 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:42.398 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.398 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:42.398 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:42.398 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:42.398 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.398 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:42.659 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:26:42.659 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:42.659 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.659 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:42.920 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.920 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:42.920 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.920 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:43.182 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.182 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:43.182 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.182 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:43.182 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:43.182 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3787421 00:26:43.182 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3787421 ']' 00:26:43.182 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3787421 00:26:43.182 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:26:43.182 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:43.182 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3787421 00:26:43.182 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:43.182 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:43.182 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3787421' 00:26:43.182 killing process with pid 3787421 00:26:43.182 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3787421 00:26:43.182 20:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3787421 00:26:43.465 { 00:26:43.465 "results": [ 00:26:43.465 { 00:26:43.465 "job": "Nvme0n1", 
00:26:43.465 "core_mask": "0x4", 00:26:43.465 "workload": "verify", 00:26:43.466 "status": "terminated", 00:26:43.466 "verify_range": { 00:26:43.466 "start": 0, 00:26:43.466 "length": 16384 00:26:43.466 }, 00:26:43.466 "queue_depth": 128, 00:26:43.466 "io_size": 4096, 00:26:43.466 "runtime": 26.85586, 00:26:43.466 "iops": 11843.150805820405, 00:26:43.466 "mibps": 46.26230783523596, 00:26:43.466 "io_failed": 0, 00:26:43.466 "io_timeout": 0, 00:26:43.466 "avg_latency_us": 10790.08581327934, 00:26:43.466 "min_latency_us": 283.3066666666667, 00:26:43.466 "max_latency_us": 3019898.88 00:26:43.466 } 00:26:43.466 ], 00:26:43.466 "core_count": 1 00:26:43.466 } 00:26:43.466 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3787421 00:26:43.466 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:43.466 [2024-11-26 20:04:15.205087] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:26:43.466 [2024-11-26 20:04:15.205176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3787421 ] 00:26:43.466 [2024-11-26 20:04:15.298519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.466 [2024-11-26 20:04:15.348346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:43.466 Running I/O for 90 seconds... 00:26:43.466 10309.00 IOPS, 40.27 MiB/s [2024-11-26T19:04:44.287Z] 10697.50 IOPS, 41.79 MiB/s [2024-11-26T19:04:44.287Z] 10847.00 IOPS, 42.37 MiB/s [2024-11-26T19:04:44.287Z] 11246.50 IOPS, 43.93 MiB/s [2024-11-26T19:04:44.287Z] 11564.80 IOPS, 45.17 MiB/s [2024-11-26T19:04:44.287Z] 11768.17 IOPS, 45.97 MiB/s [2024-11-26T19:04:44.287Z] 11956.71 IOPS, 46.71 MiB/s [2024-11-26T19:04:44.287Z] 12059.12 IOPS, 47.11 MiB/s [2024-11-26T19:04:44.287Z] 12146.56 IOPS, 47.45 MiB/s [2024-11-26T19:04:44.287Z] 12216.30 IOPS, 47.72 MiB/s [2024-11-26T19:04:44.287Z] 12285.55 IOPS, 47.99 MiB/s [2024-11-26T19:04:44.287Z] [2024-11-26 20:04:29.049924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:128192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.466 [2024-11-26 20:04:29.049959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:43.466 [2024-11-26 20:04:29.049993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:128208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.466 [2024-11-26 20:04:29.050000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:43.466 [2024-11-26 20:04:29.050011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:128216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.466 [2024-11-26 20:04:29.050017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.466 [2024-11-26 20:04:29.050027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:128224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.466 [2024-11-26 20:04:29.050032] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:43.466 [2024-11-26 20:04:29.050043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:128232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.466 [2024-11-26 20:04:29.050048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.466 [2024-11-26 20:04:29.050059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.466 [2024-11-26 20:04:29.050064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.466 [2024-11-26 20:04:29.050074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.466 [2024-11-26 20:04:29.050080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.466 [2024-11-26 20:04:29.050090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.466 [2024-11-26 20:04:29.050095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:43.466 [2024-11-26 20:04:29.050106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.466 [2024-11-26 20:04:29.050111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:43.466 [2024-11-26 20:04:29.050121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:128272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.466 [2024-11-26 20:04:29.050132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:43.466 [2024-11-26 20:04:29.050143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:128280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.466 [2024-11-26 20:04:29.050148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:43.466 [2024-11-26 20:04:29.050162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:128288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.466 [2024-11-26 20:04:29.050167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:43.466 [2024-11-26 20:04:29.050178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:128296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.466 [2024-11-26 20:04:29.050183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:43.466 [2024-11-26 20:04:29.050193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:128304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
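(Note on the records above and below: nvme_qpair.c logs each failed I/O twice, first the command via nvme_io_qpair_print_command, then its result via spdk_nvme_print_completion; the WRITE printed just above and the completion just below form one such pair. The status "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" is NVMe status code type 0x3 (path related), status code 0x2, returned while the listener's ANA state is inaccessible; the multipath policy then retries the I/O on the remaining path, which is why io_failed stays 0 in the results block above. When triaging a span like this, a hypothetical one-liner against the try.txt dumped above can condense it:)

# Count ANA-inaccessible completions in the bdevperf log
# (assumes the current directory holds the try.txt shown above).
grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' try.txt | wc -l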
00:26:43.466 [2024-11-26 20:04:29.050198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:43.466 [2024-11-26 20:04:29.050209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.466 [2024-11-26 20:04:29.050214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:43.466 [2024-11-26 20:04:29.050225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:128320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.466 [2024-11-26 20:04:29.050230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:43.466 [2024-11-26 20:04:29.050241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.466 [2024-11-26 20:04:29.050247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:43.466 [2024-11-26 20:04:29.050721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:128336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.466 [2024-11-26 20:04:29.050733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:43.466 [2024-11-26 20:04:29.050747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.466 [2024-11-26 20:04:29.050753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:43.466 [2024-11-26 20:04:29.050764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.466 [2024-11-26 20:04:29.050769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:43.466 [2024-11-26 20:04:29.050780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:128360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.466 [2024-11-26 20:04:29.050785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:43.466 [2024-11-26 20:04:29.050797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:128368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.466 [2024-11-26 20:04:29.050802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:43.466 [2024-11-26 20:04:29.050816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:128376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.466 [2024-11-26 20:04:29.050821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:43.466 [2024-11-26 20:04:29.050832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 
nsid:1 lba:128384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.050837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.050849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.050854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.050865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.050870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.050881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:128408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.050886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.050897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:128416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.050902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.050913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.050918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.050929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.050934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.050946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:128440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.050951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.050962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:128448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.050967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.050978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:128456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.050983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.050994] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.051000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.051012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.051017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.051028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.051034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.051045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.051050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.051061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:128496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.051067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.051078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:128504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.051084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.051095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.051100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.051111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:128520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.051116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.051127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:128528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.051132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.051144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:128536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.051149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0026 p:0 m:0 
dnr:0 00:26:43.467 [2024-11-26 20:04:29.051166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:128544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.051171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.051182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:128552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.051188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.051199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.051204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.051216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.051222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.051232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.051238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.051249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.051254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.051265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.051270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.051281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.051286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.051297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:128608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.051303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.051373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:128200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.467 [2024-11-26 20:04:29.051379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:43.467 [2024-11-26 20:04:29.051392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.467 [2024-11-26 20:04:29.051398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:128640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051561] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:128704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:128712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:128736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:113 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.051991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.051996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:43.468 [2024-11-26 20:04:29.052010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.468 [2024-11-26 20:04:29.052015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052193] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.469 [2024-11-26 20:04:29.052671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:43.469 [2024-11-26 20:04:29.052687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.470 [2024-11-26 20:04:29.052692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:43.470 [2024-11-26 20:04:29.052707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.470 [2024-11-26 20:04:29.052713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:43.470 [2024-11-26 20:04:29.052728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.470 [2024-11-26 20:04:29.052733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:43.470 [2024-11-26 20:04:29.052749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.470 [2024-11-26 20:04:29.052753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:43.470 [2024-11-26 20:04:29.052769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.470 [2024-11-26 20:04:29.052774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:43.470 [2024-11-26 20:04:29.052790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:129144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.470 [2024-11-26 20:04:29.052796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:43.470 [2024-11-26 20:04:29.052812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.470 [2024-11-26 20:04:29.052816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:43.470 [2024-11-26 20:04:29.052832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:129160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.470 [2024-11-26 
20:04:29.052837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:43.470 [2024-11-26 20:04:29.052853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.470 [2024-11-26 20:04:29.052857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:43.470 [2024-11-26 20:04:29.052873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.470 [2024-11-26 20:04:29.052878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:43.470 [2024-11-26 20:04:29.052922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.470 [2024-11-26 20:04:29.052928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:43.470 [2024-11-26 20:04:29.052944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.470 [2024-11-26 20:04:29.052950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:43.470 [2024-11-26 20:04:29.052966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.470 [2024-11-26 20:04:29.052971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:43.470 [2024-11-26 20:04:29.052988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:129208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.470 [2024-11-26 20:04:29.052992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:43.470 12258.00 IOPS, 47.88 MiB/s [2024-11-26T19:04:44.291Z] 11315.08 IOPS, 44.20 MiB/s [2024-11-26T19:04:44.291Z] 10506.86 IOPS, 41.04 MiB/s [2024-11-26T19:04:44.291Z] 9859.93 IOPS, 38.52 MiB/s [2024-11-26T19:04:44.291Z] 10037.81 IOPS, 39.21 MiB/s [2024-11-26T19:04:44.291Z] 10195.12 IOPS, 39.82 MiB/s [2024-11-26T19:04:44.291Z] 10541.00 IOPS, 41.18 MiB/s [2024-11-26T19:04:44.291Z] 10867.11 IOPS, 42.45 MiB/s [2024-11-26T19:04:44.291Z] 11080.25 IOPS, 43.28 MiB/s [2024-11-26T19:04:44.291Z] 11153.81 IOPS, 43.57 MiB/s [2024-11-26T19:04:44.291Z] 11222.73 IOPS, 43.84 MiB/s [2024-11-26T19:04:44.291Z] 11422.74 IOPS, 44.62 MiB/s [2024-11-26T19:04:44.291Z] 11644.75 IOPS, 45.49 MiB/s [2024-11-26T19:04:44.291Z] [2024-11-26 20:04:41.798523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:92512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.470 [2024-11-26 20:04:41.798557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:43.470 [2024-11-26 20:04:41.798574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92728 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:43.470 [2024-11-26 20:04:41.798580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:43.470 [2024-11-26 20:04:41.798591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.470 [2024-11-26 20:04:41.798601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:43.470 [2024-11-26 20:04:41.798611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:92760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.470 [2024-11-26 20:04:41.798617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:43.470 [2024-11-26 20:04:41.798627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.470 [2024-11-26 20:04:41.798632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:43.470 [2024-11-26 20:04:41.798643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:92792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.470 [2024-11-26 20:04:41.798648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:43.470 [2024-11-26 20:04:41.798658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:92808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.470 [2024-11-26 20:04:41.798663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:43.470 [2024-11-26 20:04:41.798674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:92824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.470 [2024-11-26 20:04:41.798679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:43.470 [2024-11-26 20:04:41.798689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.470 [2024-11-26 20:04:41.798694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:43.470 [2024-11-26 20:04:41.798704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:92856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.470 [2024-11-26 20:04:41.798709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:43.470 [2024-11-26 20:04:41.798720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.470 [2024-11-26 20:04:41.798725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:43.470 [2024-11-26 20:04:41.798735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:116 nsid:1 lba:92504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.470 [2024-11-26 20:04:41.798740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:43.470 [2024-11-26 20:04:41.798751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:92536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.470 [2024-11-26 20:04:41.798756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:43.470 [2024-11-26 20:04:41.798767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:92896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.470 [2024-11-26 20:04:41.798772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:43.470 [2024-11-26 20:04:41.798783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:92912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.470 [2024-11-26 20:04:41.798789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.471 [2024-11-26 20:04:41.798799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:92928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.471 [2024-11-26 20:04:41.798804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.471 [2024-11-26 20:04:41.798815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:92944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.471 [2024-11-26 20:04:41.798820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:43.471 [2024-11-26 20:04:41.798831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:92960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.471 [2024-11-26 20:04:41.798836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:43.471 [2024-11-26 20:04:41.798847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:92976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.471 [2024-11-26 20:04:41.798852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:43.471 [2024-11-26 20:04:41.798862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:92992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.471 [2024-11-26 20:04:41.798868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:43.471 [2024-11-26 20:04:41.798878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:93008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.471 [2024-11-26 20:04:41.798883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:43.471 [2024-11-26 20:04:41.799086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:93016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.471 [2024-11-26 20:04:41.799095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:43.471 [2024-11-26 20:04:41.799106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.471 [2024-11-26 20:04:41.799112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:43.471 [2024-11-26 20:04:41.799122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.471 [2024-11-26 20:04:41.799127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:43.471 [2024-11-26 20:04:41.799138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:93064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.471 [2024-11-26 20:04:41.799143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:43.471 [2024-11-26 20:04:41.799153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:93080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.471 [2024-11-26 20:04:41.799164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:43.471 [2024-11-26 20:04:41.799174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:93096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.471 [2024-11-26 20:04:41.799179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:43.471 [2024-11-26 20:04:41.799192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:93112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.471 [2024-11-26 20:04:41.799197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:43.471 [2024-11-26 20:04:41.799207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:93128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.471 [2024-11-26 20:04:41.799213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:43.471 [2024-11-26 20:04:41.799223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:93144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.471 [2024-11-26 20:04:41.799228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:43.471 [2024-11-26 20:04:41.799239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.471 [2024-11-26 20:04:41.799244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
00:26:43.471 [2024-11-26 20:04:41.799255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:93176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.471 [2024-11-26 20:04:41.799261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:43.471 [2024-11-26 20:04:41.799272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:93192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.471 [2024-11-26 20:04:41.799277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:43.471 [2024-11-26 20:04:41.799288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.471 [2024-11-26 20:04:41.799293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:43.471 [2024-11-26 20:04:41.799303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.471 [2024-11-26 20:04:41.799309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:43.471 [2024-11-26 20:04:41.799320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.471 [2024-11-26 20:04:41.799325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:43.471 [2024-11-26 20:04:41.799335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.471 [2024-11-26 20:04:41.799341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:43.471 [2024-11-26 20:04:41.799351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.471 [2024-11-26 20:04:41.799356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:43.471 [2024-11-26 20:04:41.799367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.471 [2024-11-26 20:04:41.799372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:43.471 [2024-11-26 20:04:41.799384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.471 [2024-11-26 20:04:41.799389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.799399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.472 [2024-11-26 20:04:41.799404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.799415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.472 [2024-11-26 20:04:41.799420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.799430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.472 [2024-11-26 20:04:41.799436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.799446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.472 [2024-11-26 20:04:41.799451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.799462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:92528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.472 [2024-11-26 20:04:41.799467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.799477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:92560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.472 [2024-11-26 20:04:41.799483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.799493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:93392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.472 [2024-11-26 20:04:41.799498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.799509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:93408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.472 [2024-11-26 20:04:41.799514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.799524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.472 [2024-11-26 20:04:41.799530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.799540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:93440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.472 [2024-11-26 20:04:41.799545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.799555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:93456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.472 [2024-11-26 20:04:41.799561] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.799571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:93464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.472 [2024-11-26 20:04:41.799578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.799588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.472 [2024-11-26 20:04:41.799593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.799604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:92616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.472 [2024-11-26 20:04:41.799609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.799620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:92648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.472 [2024-11-26 20:04:41.799625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.799636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.472 [2024-11-26 20:04:41.799641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.800256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:92712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.472 [2024-11-26 20:04:41.800266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.800278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:93480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.472 [2024-11-26 20:04:41.800284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.800294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:93496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.472 [2024-11-26 20:04:41.800300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.800310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:92592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.472 [2024-11-26 20:04:41.800315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.800326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:92624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:43.472 [2024-11-26 20:04:41.800331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.800342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:92656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.472 [2024-11-26 20:04:41.800347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.800357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:92688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.472 [2024-11-26 20:04:41.800362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.800372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:93512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.472 [2024-11-26 20:04:41.800380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.800391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.472 [2024-11-26 20:04:41.800396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.800406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.472 [2024-11-26 20:04:41.800411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.800421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.472 [2024-11-26 20:04:41.800426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.800437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:92832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.472 [2024-11-26 20:04:41.800442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.800452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:92864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.472 [2024-11-26 20:04:41.800458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.800468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.472 [2024-11-26 20:04:41.800473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.800483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 
lba:92920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.472 [2024-11-26 20:04:41.800488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:43.472 [2024-11-26 20:04:41.800499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:93520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.473 [2024-11-26 20:04:41.800504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.800514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:92968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.473 [2024-11-26 20:04:41.800519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.800530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:93000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.473 [2024-11-26 20:04:41.800535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.800545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:92512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.473 [2024-11-26 20:04:41.800550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.800560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.473 [2024-11-26 20:04:41.800565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.800577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.473 [2024-11-26 20:04:41.800582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.800592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:92808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.473 [2024-11-26 20:04:41.800597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.800607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.473 [2024-11-26 20:04:41.800612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.800623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.473 [2024-11-26 20:04:41.800628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.800638] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.473 [2024-11-26 20:04:41.800643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.800653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:92912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.473 [2024-11-26 20:04:41.800658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.800668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.473 [2024-11-26 20:04:41.800673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.800684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:92976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.473 [2024-11-26 20:04:41.800689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.800700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:93008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.473 [2024-11-26 20:04:41.800705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.801341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:93040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.473 [2024-11-26 20:04:41.801354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.801366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:93072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.473 [2024-11-26 20:04:41.801371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.801381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.473 [2024-11-26 20:04:41.801386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.801400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:93136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.473 [2024-11-26 20:04:41.801406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.801416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:93168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.473 [2024-11-26 20:04:41.801421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
00:26:43.473 [2024-11-26 20:04:41.801432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:93200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.473 [2024-11-26 20:04:41.801437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.801447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:93232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.473 [2024-11-26 20:04:41.801452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.801463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:93264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.473 [2024-11-26 20:04:41.801468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.801479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:93296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.473 [2024-11-26 20:04:41.801484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.801494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:93328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.473 [2024-11-26 20:04:41.801499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.801510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:93360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.473 [2024-11-26 20:04:41.801515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.801525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.473 [2024-11-26 20:04:41.801530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.801540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.473 [2024-11-26 20:04:41.801546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.801556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:93080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.473 [2024-11-26 20:04:41.801561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.801571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:93112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.473 [2024-11-26 20:04:41.801576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.801587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:93144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.473 [2024-11-26 20:04:41.801593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:43.473 [2024-11-26 20:04:41.801603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:93176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.473 [2024-11-26 20:04:41.801608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.801619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.474 [2024-11-26 20:04:41.801624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.801762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.474 [2024-11-26 20:04:41.801769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.801780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.474 [2024-11-26 20:04:41.801785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.801796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.474 [2024-11-26 20:04:41.801801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.801811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.474 [2024-11-26 20:04:41.801816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.801826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.474 [2024-11-26 20:04:41.801831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.801842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:92560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.474 [2024-11-26 20:04:41.801847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.801857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:93408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.474 [2024-11-26 20:04:41.801862] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.801872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.474 [2024-11-26 20:04:41.801877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.801887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:93464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.474 [2024-11-26 20:04:41.801892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.801903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:92616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.474 [2024-11-26 20:04:41.801909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.801920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:92680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.474 [2024-11-26 20:04:41.801925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.802229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:93400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.474 [2024-11-26 20:04:41.802238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.802250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:93432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.474 [2024-11-26 20:04:41.802255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.802266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:93472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.474 [2024-11-26 20:04:41.802271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.802282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:93504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.474 [2024-11-26 20:04:41.802287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.802297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.474 [2024-11-26 20:04:41.802302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.802313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:43.474 [2024-11-26 20:04:41.802318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.802329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.474 [2024-11-26 20:04:41.802334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.802344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:93512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.474 [2024-11-26 20:04:41.802350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.802360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.474 [2024-11-26 20:04:41.802365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.802375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.474 [2024-11-26 20:04:41.802380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.802391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:92888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.474 [2024-11-26 20:04:41.802396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.802408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:93520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.474 [2024-11-26 20:04:41.802414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.802424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:93000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.474 [2024-11-26 20:04:41.802429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.802439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.474 [2024-11-26 20:04:41.802445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.802455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:92808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.474 [2024-11-26 20:04:41.802460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.802471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 
lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.474 [2024-11-26 20:04:41.802476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.802487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:92912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.474 [2024-11-26 20:04:41.802492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.802632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.474 [2024-11-26 20:04:41.802639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.802650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.474 [2024-11-26 20:04:41.802655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:43.474 [2024-11-26 20:04:41.802666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.475 [2024-11-26 20:04:41.802671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.802681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.475 [2024-11-26 20:04:41.802686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.802697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:92856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.475 [2024-11-26 20:04:41.802703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.802713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:92928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.475 [2024-11-26 20:04:41.802718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.802733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:92992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.475 [2024-11-26 20:04:41.802738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.802749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.475 [2024-11-26 20:04:41.802753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.802764] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:93136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.475 [2024-11-26 20:04:41.802769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.802779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:93200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.475 [2024-11-26 20:04:41.802784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.802794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:93264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.475 [2024-11-26 20:04:41.802799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.802810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:93328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.475 [2024-11-26 20:04:41.802815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.802825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.475 [2024-11-26 20:04:41.802830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.802841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.475 [2024-11-26 20:04:41.802846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.802856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:93144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.475 [2024-11-26 20:04:41.802861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.802872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.475 [2024-11-26 20:04:41.802877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.803823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.475 [2024-11-26 20:04:41.803836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.803849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.475 [2024-11-26 20:04:41.803854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0047 p:0 m:0 
dnr:0 00:26:43.475 [2024-11-26 20:04:41.803867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.475 [2024-11-26 20:04:41.803872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.803883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.475 [2024-11-26 20:04:41.803888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.803898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:93464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.475 [2024-11-26 20:04:41.803903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.803914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:92680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.475 [2024-11-26 20:04:41.803919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.803930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:93064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.475 [2024-11-26 20:04:41.803935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.803946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.475 [2024-11-26 20:04:41.803951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.803961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:93192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.475 [2024-11-26 20:04:41.803966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.803976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:93432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.475 [2024-11-26 20:04:41.803981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.803991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.475 [2024-11-26 20:04:41.803997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.804007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:92592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.475 [2024-11-26 20:04:41.804012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.804022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:93512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.475 [2024-11-26 20:04:41.804027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.804038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:92832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.475 [2024-11-26 20:04:41.804043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.804053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:93520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.475 [2024-11-26 20:04:41.804059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.804069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.475 [2024-11-26 20:04:41.804074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.804085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.475 [2024-11-26 20:04:41.804090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:43.475 [2024-11-26 20:04:41.805274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:93560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.476 [2024-11-26 20:04:41.805287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:93576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.476 [2024-11-26 20:04:41.805304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.476 [2024-11-26 20:04:41.805320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.476 [2024-11-26 20:04:41.805335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:93624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.476 [2024-11-26 20:04:41.805350] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:93640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.476 [2024-11-26 20:04:41.805366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:93656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.476 [2024-11-26 20:04:41.805381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:93672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.476 [2024-11-26 20:04:41.805397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:93688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.476 [2024-11-26 20:04:41.805412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:93704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.476 [2024-11-26 20:04:41.805430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:93256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.476 [2024-11-26 20:04:41.805445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:93320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.476 [2024-11-26 20:04:41.805460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:93392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.476 [2024-11-26 20:04:41.805476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:93456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.476 [2024-11-26 20:04:41.805491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:43.476 [2024-11-26 20:04:41.805506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:92792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.476 [2024-11-26 20:04:41.805521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.476 [2024-11-26 20:04:41.805536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:93072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.476 [2024-11-26 20:04:41.805552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.476 [2024-11-26 20:04:41.805567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:93328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.476 [2024-11-26 20:04:41.805582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.476 [2024-11-26 20:04:41.805598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.476 [2024-11-26 20:04:41.805613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.476 [2024-11-26 20:04:41.805630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:93408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.476 [2024-11-26 20:04:41.805646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 
lba:92680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.476 [2024-11-26 20:04:41.805662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:93128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.476 [2024-11-26 20:04:41.805677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:93432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.476 [2024-11-26 20:04:41.805693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:92592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.476 [2024-11-26 20:04:41.805708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.476 [2024-11-26 20:04:41.805724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.476 [2024-11-26 20:04:41.805739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:93720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.476 [2024-11-26 20:04:41.805755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:43.476 [2024-11-26 20:04:41.805765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.476 [2024-11-26 20:04:41.805770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:43.477 [2024-11-26 20:04:41.805781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:93752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.477 [2024-11-26 20:04:41.805786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:43.477 [2024-11-26 20:04:41.805796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:93768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.477 [2024-11-26 20:04:41.805801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:43.477 [2024-11-26 20:04:41.805813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:93784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.477 [2024-11-26 20:04:41.805818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:43.477 [2024-11-26 20:04:41.805829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.477 [2024-11-26 20:04:41.805834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:43.477 [2024-11-26 20:04:41.805844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:92776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.477 [2024-11-26 20:04:41.805849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:43.477 [2024-11-26 20:04:41.805859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:92944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.477 [2024-11-26 20:04:41.805865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:43.477 [2024-11-26 20:04:41.807790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:93048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.477 [2024-11-26 20:04:41.807803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:43.477 [2024-11-26 20:04:41.807816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:93176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.477 [2024-11-26 20:04:41.807821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.477 [2024-11-26 20:04:41.807831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:93816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.477 [2024-11-26 20:04:41.807837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:43.477 [2024-11-26 20:04:41.807847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:93832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.477 [2024-11-26 20:04:41.807852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.477 [2024-11-26 20:04:41.807863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:93848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.477 [2024-11-26 20:04:41.807868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.477 [2024-11-26 20:04:41.807878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.477 [2024-11-26 20:04:41.807884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:26:43.477 [2024-11-26 20:04:41.807894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:93880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.477 [2024-11-26 20:04:41.807899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:43.477 [2024-11-26 20:04:41.807909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.477 [2024-11-26 20:04:41.807915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:43.477 [2024-11-26 20:04:41.807926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:93912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.477 [2024-11-26 20:04:41.807933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:43.477 [2024-11-26 20:04:41.807944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:93928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.477 [2024-11-26 20:04:41.807949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:43.477 [2024-11-26 20:04:41.807959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:93272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.477 [2024-11-26 20:04:41.807965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:43.477 [2024-11-26 20:04:41.807975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:93440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.477 [2024-11-26 20:04:41.807980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:43.477 [2024-11-26 20:04:41.807990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:93576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.477 [2024-11-26 20:04:41.807996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:43.477 [2024-11-26 20:04:41.808006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:93608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.477 [2024-11-26 20:04:41.808011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:43.477 [2024-11-26 20:04:41.808022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.477 [2024-11-26 20:04:41.808027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:43.477 [2024-11-26 20:04:41.808037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:93672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.477 [2024-11-26 20:04:41.808042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:43.477 [2024-11-26 20:04:41.808052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:93704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.477 [2024-11-26 20:04:41.808058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:43.477 [2024-11-26 20:04:41.808068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:93320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.477 [2024-11-26 20:04:41.808073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:43.477 [2024-11-26 20:04:41.808083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.478 [2024-11-26 20:04:41.808088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.808099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.478 [2024-11-26 20:04:41.808104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.808114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:93072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.478 [2024-11-26 20:04:41.808120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.808130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.478 [2024-11-26 20:04:41.808135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.808146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.478 [2024-11-26 20:04:41.808151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.808165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:93408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.478 [2024-11-26 20:04:41.808171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.808181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:93128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.478 [2024-11-26 20:04:41.808186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.808196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:92592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.478 [2024-11-26 20:04:41.808201] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.808212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.478 [2024-11-26 20:04:41.808217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.808227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:93736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.478 [2024-11-26 20:04:41.808232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.808242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.478 [2024-11-26 20:04:41.808247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.808258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.478 [2024-11-26 20:04:41.808262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.808273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.478 [2024-11-26 20:04:41.808278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.808288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.478 [2024-11-26 20:04:41.808294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.808304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:93944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.478 [2024-11-26 20:04:41.808309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.808320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:93960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.478 [2024-11-26 20:04:41.808325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.808336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.478 [2024-11-26 20:04:41.808340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.808351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:43.478 [2024-11-26 20:04:41.808356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.808366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:94008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.478 [2024-11-26 20:04:41.808371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.808381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.478 [2024-11-26 20:04:41.808386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.808397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:93552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.478 [2024-11-26 20:04:41.808402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.808412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:93584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.478 [2024-11-26 20:04:41.808417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.808427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.478 [2024-11-26 20:04:41.808433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.808443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.478 [2024-11-26 20:04:41.808448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.808458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:93680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.478 [2024-11-26 20:04:41.808463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.808474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:93712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.478 [2024-11-26 20:04:41.808479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.810059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:93016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.478 [2024-11-26 20:04:41.810073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.810088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 
nsid:1 lba:93240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.478 [2024-11-26 20:04:41.810094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.810104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:94048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.478 [2024-11-26 20:04:41.810110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.810120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:94064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.478 [2024-11-26 20:04:41.810125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.810135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:94080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.478 [2024-11-26 20:04:41.810140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:43.478 [2024-11-26 20:04:41.810150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.478 [2024-11-26 20:04:41.810156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:94112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.479 [2024-11-26 20:04:41.810176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.479 [2024-11-26 20:04:41.810192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:93464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.479 [2024-11-26 20:04:41.810207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:93520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.479 [2024-11-26 20:04:41.810222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:93728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.479 [2024-11-26 20:04:41.810238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810248] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:93760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.479 [2024-11-26 20:04:41.810253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.479 [2024-11-26 20:04:41.810268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.479 [2024-11-26 20:04:41.810285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.479 [2024-11-26 20:04:41.810300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.479 [2024-11-26 20:04:41.810315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.479 [2024-11-26 20:04:41.810331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.479 [2024-11-26 20:04:41.810346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.479 [2024-11-26 20:04:41.810361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.479 [2024-11-26 20:04:41.810376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:93176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.479 [2024-11-26 20:04:41.810392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 
00:26:43.479 [2024-11-26 20:04:41.810402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:93832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.479 [2024-11-26 20:04:41.810407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:93864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.479 [2024-11-26 20:04:41.810422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:93896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.479 [2024-11-26 20:04:41.810437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:93928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.479 [2024-11-26 20:04:41.810452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:93440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.479 [2024-11-26 20:04:41.810469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:93608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.479 [2024-11-26 20:04:41.810484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:93672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.479 [2024-11-26 20:04:41.810499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.479 [2024-11-26 20:04:41.810514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:92792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.479 [2024-11-26 20:04:41.810529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:93328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.479 [2024-11-26 20:04:41.810545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:4 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:93408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.479 [2024-11-26 20:04:41.810560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.479 [2024-11-26 20:04:41.810575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:93736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.479 [2024-11-26 20:04:41.810590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.479 [2024-11-26 20:04:41.810608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:92808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.479 [2024-11-26 20:04:41.810623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:93960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.479 [2024-11-26 20:04:41.810638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:43.479 [2024-11-26 20:04:41.810649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:93992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.480 [2024-11-26 20:04:41.810654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.811344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.480 [2024-11-26 20:04:41.811355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.811366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:93584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.480 [2024-11-26 20:04:41.811372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.811382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:93648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.480 [2024-11-26 20:04:41.811387] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.811398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:93712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.480 [2024-11-26 20:04:41.811403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.811413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.480 [2024-11-26 20:04:41.811418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.811429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.480 [2024-11-26 20:04:41.811434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.811445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.480 [2024-11-26 20:04:41.811450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.811460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.480 [2024-11-26 20:04:41.811465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.811475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:93592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.480 [2024-11-26 20:04:41.811480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.811491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:93656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.480 [2024-11-26 20:04:41.811496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.811506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:93080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.480 [2024-11-26 20:04:41.811511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.811521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.480 [2024-11-26 20:04:41.811526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.811539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:93720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:43.480 [2024-11-26 20:04:41.811544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.811554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.480 [2024-11-26 20:04:41.811559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.811569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.480 [2024-11-26 20:04:41.811575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.811585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.480 [2024-11-26 20:04:41.811590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.811600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.480 [2024-11-26 20:04:41.811605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.811615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.480 [2024-11-26 20:04:41.811620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.811630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.480 [2024-11-26 20:04:41.811635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.811646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.480 [2024-11-26 20:04:41.811651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.811661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.480 [2024-11-26 20:04:41.811666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.811965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.480 [2024-11-26 20:04:41.811974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.811985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 
nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.480 [2024-11-26 20:04:41.811990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.812000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.480 [2024-11-26 20:04:41.812005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.812016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.480 [2024-11-26 20:04:41.812023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.812033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.480 [2024-11-26 20:04:41.812039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.812049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:94056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.480 [2024-11-26 20:04:41.812054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.812064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.480 [2024-11-26 20:04:41.812069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.812080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:94120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.480 [2024-11-26 20:04:41.812084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:43.480 [2024-11-26 20:04:41.812095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:93240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.480 [2024-11-26 20:04:41.812100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:43.481 [2024-11-26 20:04:41.812110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:94064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.481 [2024-11-26 20:04:41.812115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:43.481 [2024-11-26 20:04:41.812125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.481 [2024-11-26 20:04:41.812130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:43.481 [2024-11-26 20:04:41.812140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.481 [2024-11-26 20:04:41.812145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:26:43.481 [2024-11-26 20:04:41.812155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:93520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.481 [2024-11-26 20:04:41.812174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:26:43.481 [2024-11-26 20:04:41.812184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.481 [2024-11-26 20:04:41.812189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
[... remaining nvme_qpair.c command/completion NOTICE pairs omitted: every outstanding READ/WRITE on qid:1 in this burst completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0, timestamps 20:04:41.812 through 20:04:41.821 ...]
00:26:43.487 [2024-11-26 20:04:41.821813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-11-26 20:04:41.821819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:26:43.487 [2024-11-26 20:04:41.821829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.487 [2024-11-26 20:04:41.821834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:43.487 [2024-11-26 20:04:41.821849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.487 [2024-11-26 20:04:41.821854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:43.487 [2024-11-26 20:04:41.821864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:94872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.487 [2024-11-26 20:04:41.821869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:43.487 [2024-11-26 20:04:41.821879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:94712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.487 [2024-11-26 20:04:41.821884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:43.487 [2024-11-26 20:04:41.821895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.487 [2024-11-26 20:04:41.821900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:43.487 [2024-11-26 20:04:41.821910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.487 [2024-11-26 20:04:41.821915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:43.487 [2024-11-26 20:04:41.821926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.487 [2024-11-26 20:04:41.821931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:43.487 [2024-11-26 20:04:41.821941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.487 [2024-11-26 20:04:41.821946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:43.487 [2024-11-26 20:04:41.821956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.487 [2024-11-26 20:04:41.821961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:43.487 [2024-11-26 20:04:41.821972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.487 [2024-11-26 20:04:41.821977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:43.487 [2024-11-26 20:04:41.821987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.487 [2024-11-26 20:04:41.821992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:43.487 [2024-11-26 20:04:41.822002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.487 [2024-11-26 20:04:41.822007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:43.487 [2024-11-26 20:04:41.822018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:93408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.487 [2024-11-26 20:04:41.822023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:43.487 [2024-11-26 20:04:41.822033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.487 [2024-11-26 20:04:41.822039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:43.487 [2024-11-26 20:04:41.822050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:94904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.487 [2024-11-26 20:04:41.822055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:43.487 [2024-11-26 20:04:41.822065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.487 [2024-11-26 20:04:41.822070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:43.487 [2024-11-26 20:04:41.822080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.487 [2024-11-26 20:04:41.822085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:43.487 [2024-11-26 20:04:41.822096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:93672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.487 [2024-11-26 20:04:41.822101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:43.487 [2024-11-26 20:04:41.822111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.487 [2024-11-26 20:04:41.822116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:43.487 [2024-11-26 20:04:41.822126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.487 [2024-11-26 20:04:41.822131] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:43.487 [2024-11-26 20:04:41.822141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.487 [2024-11-26 20:04:41.822146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:43.487 [2024-11-26 20:04:41.822157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.487 [2024-11-26 20:04:41.822167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:43.487 [2024-11-26 20:04:41.822761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:94312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.488 [2024-11-26 20:04:41.822771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:43.488 [2024-11-26 20:04:41.822782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.488 [2024-11-26 20:04:41.822787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.488 [2024-11-26 20:04:41.822797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.488 [2024-11-26 20:04:41.822803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.488 [2024-11-26 20:04:41.822813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.488 [2024-11-26 20:04:41.822820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:43.488 [2024-11-26 20:04:41.822830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.488 [2024-11-26 20:04:41.822835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:43.488 [2024-11-26 20:04:41.822846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.488 [2024-11-26 20:04:41.822851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:43.488 [2024-11-26 20:04:41.822861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.488 [2024-11-26 20:04:41.822866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:43.488 [2024-11-26 20:04:41.822876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:43.488 [2024-11-26 20:04:41.822881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:43.488 [2024-11-26 20:04:41.822891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.488 [2024-11-26 20:04:41.822897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:43.488 [2024-11-26 20:04:41.822907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.488 [2024-11-26 20:04:41.822912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:43.488 [2024-11-26 20:04:41.822922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:93736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.488 [2024-11-26 20:04:41.822927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:43.488 [2024-11-26 20:04:41.822938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.488 [2024-11-26 20:04:41.822943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:43.488 [2024-11-26 20:04:41.822953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:94800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.488 [2024-11-26 20:04:41.822958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:43.488 [2024-11-26 20:04:41.822969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.488 [2024-11-26 20:04:41.822974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:43.488 [2024-11-26 20:04:41.824230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.488 [2024-11-26 20:04:41.824243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:43.488 [2024-11-26 20:04:41.824254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.488 [2024-11-26 20:04:41.824262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:43.488 [2024-11-26 20:04:41.824273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.488 [2024-11-26 20:04:41.824278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:43.488 [2024-11-26 20:04:41.824288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 
lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.488 [2024-11-26 20:04:41.824293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:43.488 [2024-11-26 20:04:41.824304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.488 [2024-11-26 20:04:41.824309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:43.488 [2024-11-26 20:04:41.824319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.488 [2024-11-26 20:04:41.824324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:43.488 [2024-11-26 20:04:41.824335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.488 [2024-11-26 20:04:41.824340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:43.488 [2024-11-26 20:04:41.824350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.488 [2024-11-26 20:04:41.824355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:43.488 [2024-11-26 20:04:41.824365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.488 [2024-11-26 20:04:41.824370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:43.488 [2024-11-26 20:04:41.824380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.488 [2024-11-26 20:04:41.824385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:43.488 [2024-11-26 20:04:41.824396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.489 [2024-11-26 20:04:41.824401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:43.489 [2024-11-26 20:04:41.824411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:94688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.489 [2024-11-26 20:04:41.824416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:43.489 [2024-11-26 20:04:41.824426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.489 [2024-11-26 20:04:41.824431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:43.489 [2024-11-26 20:04:41.824442] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.489 [2024-11-26 20:04:41.824447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:43.489 [2024-11-26 20:04:41.824458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.489 [2024-11-26 20:04:41.824463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:43.489 [2024-11-26 20:04:41.824473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.489 [2024-11-26 20:04:41.824479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:43.489 [2024-11-26 20:04:41.824489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.489 [2024-11-26 20:04:41.824494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:43.489 [2024-11-26 20:04:41.824504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.489 [2024-11-26 20:04:41.824509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:43.489 [2024-11-26 20:04:41.824519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.489 [2024-11-26 20:04:41.824524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:43.489 [2024-11-26 20:04:41.824534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:94400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.489 [2024-11-26 20:04:41.824540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.489 [2024-11-26 20:04:41.824550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.489 [2024-11-26 20:04:41.824555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.489 [2024-11-26 20:04:41.824565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.489 [2024-11-26 20:04:41.824570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:43.489 [2024-11-26 20:04:41.824580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.489 [2024-11-26 20:04:41.824585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0064 p:0 m:0 
dnr:0 00:26:43.489 [2024-11-26 20:04:41.824596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.489 [2024-11-26 20:04:41.824600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:43.489 [2024-11-26 20:04:41.824611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.489 [2024-11-26 20:04:41.824616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:43.489 [2024-11-26 20:04:41.824626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.489 [2024-11-26 20:04:41.824631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:43.489 [2024-11-26 20:04:41.824643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.489 [2024-11-26 20:04:41.824648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:43.489 [2024-11-26 20:04:41.824658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:93672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.489 [2024-11-26 20:04:41.824663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:43.489 [2024-11-26 20:04:41.824673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.489 [2024-11-26 20:04:41.824679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:43.489 [2024-11-26 20:04:41.824689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.489 [2024-11-26 20:04:41.824694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:43.489 [2024-11-26 20:04:41.824705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.489 [2024-11-26 20:04:41.824710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:43.489 [2024-11-26 20:04:41.824720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.489 [2024-11-26 20:04:41.824725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:43.489 [2024-11-26 20:04:41.824735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.489 [2024-11-26 20:04:41.824740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:43.489 [2024-11-26 20:04:41.824751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.489 [2024-11-26 20:04:41.824756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:43.489 [2024-11-26 20:04:41.824766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.489 [2024-11-26 20:04:41.824771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:43.489 [2024-11-26 20:04:41.824781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.489 [2024-11-26 20:04:41.824786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:43.489 [2024-11-26 20:04:41.824796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.489 [2024-11-26 20:04:41.824801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:43.489 [2024-11-26 20:04:41.824812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.489 [2024-11-26 20:04:41.824817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:43.489 [2024-11-26 20:04:41.824827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.489 [2024-11-26 20:04:41.824833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:43.489 [2024-11-26 20:04:41.824844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.489 [2024-11-26 20:04:41.824849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.824859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.490 [2024-11-26 20:04:41.824864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.824874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.490 [2024-11-26 20:04:41.824879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.824889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:94456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.490 [2024-11-26 20:04:41.824894] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.824905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.490 [2024-11-26 20:04:41.824910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.825332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.490 [2024-11-26 20:04:41.825341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.825352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.490 [2024-11-26 20:04:41.825357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.825368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.490 [2024-11-26 20:04:41.825373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.825384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.490 [2024-11-26 20:04:41.825389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.825399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.490 [2024-11-26 20:04:41.825404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.825414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.490 [2024-11-26 20:04:41.825419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.825430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.490 [2024-11-26 20:04:41.825437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.825447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.490 [2024-11-26 20:04:41.825452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.825463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:43.490 [2024-11-26 20:04:41.825468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.825478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.490 [2024-11-26 20:04:41.825484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.825494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.490 [2024-11-26 20:04:41.825499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.825509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.490 [2024-11-26 20:04:41.825514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.825524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.490 [2024-11-26 20:04:41.825529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.825540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.490 [2024-11-26 20:04:41.825545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.825555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.490 [2024-11-26 20:04:41.825560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.825570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.490 [2024-11-26 20:04:41.825575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.825585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:95512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.490 [2024-11-26 20:04:41.825590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.825600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.490 [2024-11-26 20:04:41.825606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.825616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 
lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.490 [2024-11-26 20:04:41.825621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.826637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.490 [2024-11-26 20:04:41.826649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.826661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.490 [2024-11-26 20:04:41.826666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.826676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.490 [2024-11-26 20:04:41.826681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.826692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.490 [2024-11-26 20:04:41.826697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.826707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.490 [2024-11-26 20:04:41.826712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.826722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.490 [2024-11-26 20:04:41.826728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.826738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.490 [2024-11-26 20:04:41.826743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.826753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:94688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.490 [2024-11-26 20:04:41.826758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.826769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.490 [2024-11-26 20:04:41.826774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:43.490 [2024-11-26 20:04:41.826784] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.491 [2024-11-26 20:04:41.826789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:43.491 [2024-11-26 20:04:41.826799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.491 [2024-11-26 20:04:41.826805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:43.491 [2024-11-26 20:04:41.826815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:94400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.491 [2024-11-26 20:04:41.826820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:43.491 [2024-11-26 20:04:41.826833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.491 [2024-11-26 20:04:41.826838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:43.491 [2024-11-26 20:04:41.826848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.491 [2024-11-26 20:04:41.826853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:43.491 [2024-11-26 20:04:41.826863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.491 [2024-11-26 20:04:41.826869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:43.491 [2024-11-26 20:04:41.826879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:93672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.491 [2024-11-26 20:04:41.826884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:43.491 [2024-11-26 20:04:41.826894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.491 [2024-11-26 20:04:41.826900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:43.491 [2024-11-26 20:04:41.826910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.491 [2024-11-26 20:04:41.826915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:43.491 [2024-11-26 20:04:41.826925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.491 [2024-11-26 20:04:41.826930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001f p:0 m:0 
dnr:0 00:26:43.491 [2024-11-26 20:04:41.826940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.491 [2024-11-26 20:04:41.826945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:43.491 [2024-11-26 20:04:41.826955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.491 [2024-11-26 20:04:41.826960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.491 [2024-11-26 20:04:41.826971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.491 [2024-11-26 20:04:41.826976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.491 [2024-11-26 20:04:41.826986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.491 [2024-11-26 20:04:41.826991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:43.491 [2024-11-26 20:04:41.827002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.491 [2024-11-26 20:04:41.827007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:43.491 [2024-11-26 20:04:41.827017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.491 [2024-11-26 20:04:41.827023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:43.491 [2024-11-26 20:04:41.827034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.491 [2024-11-26 20:04:41.827038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:43.491 [2024-11-26 20:04:41.827049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.491 [2024-11-26 20:04:41.827054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:43.491 [2024-11-26 20:04:41.827064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.491 [2024-11-26 20:04:41.827069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:43.491 [2024-11-26 20:04:41.827079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.491 [2024-11-26 20:04:41.827084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:43.491 [2024-11-26 20:04:41.827094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.491 [2024-11-26 20:04:41.827099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:43.491 [2024-11-26 20:04:41.827110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.491 [2024-11-26 20:04:41.827116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:43.491 [2024-11-26 20:04:41.827126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.491 [2024-11-26 20:04:41.827131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:43.491 [2024-11-26 20:04:41.827141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.491 [2024-11-26 20:04:41.827146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:43.491 [2024-11-26 20:04:41.827156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.491 [2024-11-26 20:04:41.827166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:43.491 [2024-11-26 20:04:41.827176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.491 [2024-11-26 20:04:41.827181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:43.491 [2024-11-26 20:04:41.828024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.491 [2024-11-26 20:04:41.828034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:43.491 [2024-11-26 20:04:41.828045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.491 [2024-11-26 20:04:41.828053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:43.491 [2024-11-26 20:04:41.828063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.491 [2024-11-26 20:04:41.828068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:43.491 [2024-11-26 20:04:41.828079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.491 [2024-11-26 20:04:41.828084] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:26:43.491 [2024-11-26 20:04:41.828094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.491 [2024-11-26 20:04:41.828099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:26:43.492 [2024-11-26 20:04:41.828110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.492 [2024-11-26 20:04:41.828115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:26:43.492 [2024-11-26 20:04:41.828125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.492 [2024-11-26 20:04:41.828130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:26:43.492 [2024-11-26 20:04:41.828140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.492 [2024-11-26 20:04:41.828145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
[... several hundred similar nvme_qpair.c *NOTICE* command/completion pairs elided: queued qid:1 READ/WRITE I/O (lba 93672-96512, len:8) completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) between 20:04:41.828 and 20:04:41.838 ...]
00:26:43.498 [2024-11-26 20:04:41.837873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.498 [2024-11-26 20:04:41.837879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:43.498 [2024-11-26 20:04:41.837890]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.498 [2024-11-26 20:04:41.837895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.498 [2024-11-26 20:04:41.837905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.498 [2024-11-26 20:04:41.837910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:43.498 [2024-11-26 20:04:41.837921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.498 [2024-11-26 20:04:41.837925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:43.498 [2024-11-26 20:04:41.837936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.499 [2024-11-26 20:04:41.837941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:43.499 [2024-11-26 20:04:41.837951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.499 [2024-11-26 20:04:41.837956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:43.499 [2024-11-26 20:04:41.837967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.499 [2024-11-26 20:04:41.837972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:43.499 [2024-11-26 20:04:41.837982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.499 [2024-11-26 20:04:41.837987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:43.499 [2024-11-26 20:04:41.838708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.499 [2024-11-26 20:04:41.838718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:43.499 [2024-11-26 20:04:41.838730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.499 [2024-11-26 20:04:41.838735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:43.499 [2024-11-26 20:04:41.838745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.499 [2024-11-26 20:04:41.838750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000b p:0 
m:0 dnr:0 00:26:43.499 [2024-11-26 20:04:41.838760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.499 [2024-11-26 20:04:41.838765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:43.499 [2024-11-26 20:04:41.838775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.499 [2024-11-26 20:04:41.838782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:43.499 [2024-11-26 20:04:41.838792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.499 [2024-11-26 20:04:41.838797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:43.499 [2024-11-26 20:04:41.838808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.499 [2024-11-26 20:04:41.838813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:43.499 [2024-11-26 20:04:41.838823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.499 [2024-11-26 20:04:41.838828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:43.499 [2024-11-26 20:04:41.838838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.499 [2024-11-26 20:04:41.838843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:43.499 [2024-11-26 20:04:41.838853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.499 [2024-11-26 20:04:41.838858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:43.499 [2024-11-26 20:04:41.838868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.499 [2024-11-26 20:04:41.838873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:43.499 [2024-11-26 20:04:41.838883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.499 [2024-11-26 20:04:41.838889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:43.499 [2024-11-26 20:04:41.838899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.499 [2024-11-26 20:04:41.838904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:43.499 [2024-11-26 20:04:41.838914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.499 [2024-11-26 20:04:41.838919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:43.499 [2024-11-26 20:04:41.838930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.499 [2024-11-26 20:04:41.838935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:43.499 [2024-11-26 20:04:41.838945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.499 [2024-11-26 20:04:41.838950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:43.499 [2024-11-26 20:04:41.838960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.499 [2024-11-26 20:04:41.838965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:43.499 [2024-11-26 20:04:41.838979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.499 [2024-11-26 20:04:41.838984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:43.499 [2024-11-26 20:04:41.838994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.499 [2024-11-26 20:04:41.839000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:43.499 [2024-11-26 20:04:41.839010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.499 [2024-11-26 20:04:41.839015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:43.499 [2024-11-26 20:04:41.839026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.499 [2024-11-26 20:04:41.839031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:43.499 [2024-11-26 20:04:41.839447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.499 [2024-11-26 20:04:41.839456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.839468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.500 [2024-11-26 20:04:41.839473] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.839483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.500 [2024-11-26 20:04:41.839488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.839498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.500 [2024-11-26 20:04:41.839504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.839514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.500 [2024-11-26 20:04:41.839519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.839529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.500 [2024-11-26 20:04:41.839534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.839545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.500 [2024-11-26 20:04:41.839550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.839560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.500 [2024-11-26 20:04:41.839565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.839577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.500 [2024-11-26 20:04:41.839583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.839593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.500 [2024-11-26 20:04:41.839598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.839608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.500 [2024-11-26 20:04:41.839613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.839624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:43.500 [2024-11-26 20:04:41.839629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.839639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.500 [2024-11-26 20:04:41.839644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.839654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.500 [2024-11-26 20:04:41.839660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.839670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.500 [2024-11-26 20:04:41.839675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.839685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.500 [2024-11-26 20:04:41.839690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.839701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.500 [2024-11-26 20:04:41.839706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.839716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.500 [2024-11-26 20:04:41.839721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.839731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.500 [2024-11-26 20:04:41.839737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.840135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.500 [2024-11-26 20:04:41.840144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.840155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.500 [2024-11-26 20:04:41.840167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.840178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.500 [2024-11-26 20:04:41.840183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.840194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.500 [2024-11-26 20:04:41.840199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.840209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.500 [2024-11-26 20:04:41.840214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.840225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.500 [2024-11-26 20:04:41.840230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.840240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.500 [2024-11-26 20:04:41.840245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.840255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.500 [2024-11-26 20:04:41.840261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.840271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.500 [2024-11-26 20:04:41.840276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.840286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.500 [2024-11-26 20:04:41.840291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.840301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.500 [2024-11-26 20:04:41.840307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:43.500 [2024-11-26 20:04:41.840317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.501 [2024-11-26 20:04:41.840322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:43.501 [2024-11-26 20:04:41.840332] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.501 [2024-11-26 20:04:41.840337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:43.501 [2024-11-26 20:04:41.840347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.501 [2024-11-26 20:04:41.840354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:43.501 [2024-11-26 20:04:41.840364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.501 [2024-11-26 20:04:41.840369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:43.501 [2024-11-26 20:04:41.840380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.501 [2024-11-26 20:04:41.840385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:43.501 [2024-11-26 20:04:41.840395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.501 [2024-11-26 20:04:41.840400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.501 [2024-11-26 20:04:41.840410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.501 [2024-11-26 20:04:41.840415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.501 [2024-11-26 20:04:41.840426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.501 [2024-11-26 20:04:41.840431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:43.501 [2024-11-26 20:04:41.840441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.501 [2024-11-26 20:04:41.840447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:43.501 [2024-11-26 20:04:41.841027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.501 [2024-11-26 20:04:41.841036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:43.501 [2024-11-26 20:04:41.841048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.501 [2024-11-26 20:04:41.841053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 
dnr:0 00:26:43.501 [2024-11-26 20:04:41.841063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.501 [2024-11-26 20:04:41.841069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:43.501 [2024-11-26 20:04:41.841079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.501 [2024-11-26 20:04:41.841084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:43.501 [2024-11-26 20:04:41.841095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.501 [2024-11-26 20:04:41.841100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:43.501 [2024-11-26 20:04:41.841110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.501 [2024-11-26 20:04:41.841115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:43.501 [2024-11-26 20:04:41.841128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.501 [2024-11-26 20:04:41.841133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:43.501 [2024-11-26 20:04:41.841143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.501 [2024-11-26 20:04:41.841149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:43.501 [2024-11-26 20:04:41.841163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.501 [2024-11-26 20:04:41.841169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:43.501 [2024-11-26 20:04:41.841179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.501 [2024-11-26 20:04:41.841184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:43.501 [2024-11-26 20:04:41.841195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.501 [2024-11-26 20:04:41.841200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:43.501 [2024-11-26 20:04:41.841210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.501 [2024-11-26 20:04:41.841215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:43.501 [2024-11-26 20:04:41.841226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.501 [2024-11-26 20:04:41.841231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:43.501 [2024-11-26 20:04:41.841241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.501 [2024-11-26 20:04:41.841246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:43.501 [2024-11-26 20:04:41.841257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.501 [2024-11-26 20:04:41.841262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:43.501 [2024-11-26 20:04:41.842072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.501 [2024-11-26 20:04:41.842082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:43.501 [2024-11-26 20:04:41.842093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.501 [2024-11-26 20:04:41.842098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:43.501 [2024-11-26 20:04:41.842108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.501 [2024-11-26 20:04:41.842113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:43.501 [2024-11-26 20:04:41.842126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.501 [2024-11-26 20:04:41.842131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:43.501 [2024-11-26 20:04:41.842141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.501 [2024-11-26 20:04:41.842146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:43.501 [2024-11-26 20:04:41.842156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.501 [2024-11-26 20:04:41.842165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.842176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.502 [2024-11-26 20:04:41.842181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.842191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.502 [2024-11-26 20:04:41.842195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.842206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.502 [2024-11-26 20:04:41.842211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.842221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.502 [2024-11-26 20:04:41.842226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.842236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.502 [2024-11-26 20:04:41.842241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.842251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.502 [2024-11-26 20:04:41.842256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.842266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.502 [2024-11-26 20:04:41.842271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.842281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.502 [2024-11-26 20:04:41.842286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.842297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.502 [2024-11-26 20:04:41.842302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.842312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.502 [2024-11-26 20:04:41.842318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.842329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:43.502 [2024-11-26 20:04:41.842334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.842344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.502 [2024-11-26 20:04:41.842349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.842359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.502 [2024-11-26 20:04:41.842364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.842374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.502 [2024-11-26 20:04:41.842380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.842390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.502 [2024-11-26 20:04:41.842395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.842406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.502 [2024-11-26 20:04:41.842410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.842421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.502 [2024-11-26 20:04:41.842426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.842436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.502 [2024-11-26 20:04:41.842442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.842452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.502 [2024-11-26 20:04:41.842457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.842468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.502 [2024-11-26 20:04:41.842473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.842483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 
nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.502 [2024-11-26 20:04:41.842488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.842499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.502 [2024-11-26 20:04:41.842505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.842515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.502 [2024-11-26 20:04:41.842520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.842531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.502 [2024-11-26 20:04:41.842536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.844125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.502 [2024-11-26 20:04:41.844140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.844152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.502 [2024-11-26 20:04:41.844157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.844172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.502 [2024-11-26 20:04:41.844177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.844187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.502 [2024-11-26 20:04:41.844192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.844202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.502 [2024-11-26 20:04:41.844207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.844217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.502 [2024-11-26 20:04:41.844222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:43.502 [2024-11-26 20:04:41.844232] nvme_qpair.c: 
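Aside: the "(03/02)" tuple that spdk_nvme_print_completion() attaches to each completion above is the (status code type / status code) pair. Below is a minimal standalone C sketch of the decoding, assuming the NVMe base specification's Path Related Status table; the NVME_* names are illustrative assumptions, not SPDK identifiers. SCT 0x3 with SC 0x02 is "Asymmetric Access Inaccessible", i.e. the ANA state these I/Os are being failed with.

    /* Sketch only: decode the "(SCT/SC)" tuple printed by
     * spdk_nvme_print_completion() above. Values follow the NVMe base
     * spec's Path Related Status table; the NVME_* names here are
     * illustrative assumptions, not SPDK's own identifiers. */
    #include <stdio.h>
    #include <stdint.h>

    #define NVME_SCT_PATH               0x3  /* Status Code Type 3h: path related */
    #define NVME_SC_ANA_PERSISTENT_LOSS 0x01 /* Asymmetric Access Persistent Loss */
    #define NVME_SC_ANA_INACCESSIBLE    0x02 /* Asymmetric Access Inaccessible */
    #define NVME_SC_ANA_TRANSITION      0x03 /* Asymmetric Access Transition */

    static const char *path_status(uint8_t sc)
    {
            switch (sc) {
            case NVME_SC_ANA_PERSISTENT_LOSS: return "ASYMMETRIC ACCESS PERSISTENT LOSS";
            case NVME_SC_ANA_INACCESSIBLE:    return "ASYMMETRIC ACCESS INACCESSIBLE";
            case NVME_SC_ANA_TRANSITION:      return "ASYMMETRIC ACCESS TRANSITION";
            default:                          return "RESERVED/UNKNOWN PATH STATUS";
            }
    }

    int main(void)
    {
            uint8_t sct = 0x03, sc = 0x02; /* the "(03/02)" seen in every completion above */

            if (sct == NVME_SCT_PATH)
                    printf("(%02x/%02x) => %s\n", (unsigned)sct, (unsigned)sc, path_status(sc));
            return 0;
    }

Compiled and run, this prints "(03/02) => ASYMMETRIC ACCESS INACCESSIBLE", matching the NOTICE lines on either side.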
00:26:43.504 [2024-11-26 20:04:41.846198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.504 [2024-11-26 20:04:41.846203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:26:43.504 [2024-11-26 20:04:41.846213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1
cid:71 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.504 [2024-11-26 20:04:41.846218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:43.504 [2024-11-26 20:04:41.846229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.504 [2024-11-26 20:04:41.846234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:43.504 [2024-11-26 20:04:41.846244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.504 [2024-11-26 20:04:41.846249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:43.504 [2024-11-26 20:04:41.846260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.504 [2024-11-26 20:04:41.846267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:43.504 [2024-11-26 20:04:41.846277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.504 [2024-11-26 20:04:41.846283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:43.504 [2024-11-26 20:04:41.846293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.504 [2024-11-26 20:04:41.846298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:43.504 [2024-11-26 20:04:41.846308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.504 [2024-11-26 20:04:41.846313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:43.504 [2024-11-26 20:04:41.846324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.504 [2024-11-26 20:04:41.846329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:43.504 [2024-11-26 20:04:41.846339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.504 [2024-11-26 20:04:41.846344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:43.504 [2024-11-26 20:04:41.846354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.504 [2024-11-26 20:04:41.846360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:43.504 [2024-11-26 20:04:41.846556] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.504 [2024-11-26 20:04:41.846564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:43.504 [2024-11-26 20:04:41.846575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.504 [2024-11-26 20:04:41.846581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:43.504 [2024-11-26 20:04:41.846591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.504 [2024-11-26 20:04:41.846596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:43.504 [2024-11-26 20:04:41.846607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.504 [2024-11-26 20:04:41.846612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:43.504 [2024-11-26 20:04:41.846622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.504 [2024-11-26 20:04:41.846627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:43.504 [2024-11-26 20:04:41.846638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.504 [2024-11-26 20:04:41.846643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:43.504 [2024-11-26 20:04:41.846655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.504 [2024-11-26 20:04:41.846661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:43.504 [2024-11-26 20:04:41.846671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.504 [2024-11-26 20:04:41.846676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:43.504 [2024-11-26 20:04:41.846686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.504 [2024-11-26 20:04:41.846691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.846702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.505 [2024-11-26 20:04:41.846707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003c 
p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.846717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.505 [2024-11-26 20:04:41.846722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.846733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.505 [2024-11-26 20:04:41.846738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.846748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.505 [2024-11-26 20:04:41.846753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.846763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.505 [2024-11-26 20:04:41.846769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.846779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.505 [2024-11-26 20:04:41.846784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.846794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.505 [2024-11-26 20:04:41.846800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.847695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.505 [2024-11-26 20:04:41.847706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.847718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.505 [2024-11-26 20:04:41.847723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.847735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.505 [2024-11-26 20:04:41.847740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.847751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.505 [2024-11-26 20:04:41.847756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.847766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.505 [2024-11-26 20:04:41.847771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.847781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.505 [2024-11-26 20:04:41.847786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.847796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.505 [2024-11-26 20:04:41.847801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.847811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.505 [2024-11-26 20:04:41.847816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.847826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.505 [2024-11-26 20:04:41.847831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.847841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.505 [2024-11-26 20:04:41.847846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.847856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.505 [2024-11-26 20:04:41.847861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.847871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.505 [2024-11-26 20:04:41.847876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.847886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.505 [2024-11-26 20:04:41.847892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.847902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.505 [2024-11-26 20:04:41.847907] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.847917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.505 [2024-11-26 20:04:41.847925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.847935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.505 [2024-11-26 20:04:41.847941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.847951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.505 [2024-11-26 20:04:41.847956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.847966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.505 [2024-11-26 20:04:41.847971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.847981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.505 [2024-11-26 20:04:41.847986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.847997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.505 [2024-11-26 20:04:41.848002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.848012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.505 [2024-11-26 20:04:41.848017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.848027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.505 [2024-11-26 20:04:41.848032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:43.505 [2024-11-26 20:04:41.848042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.505 [2024-11-26 20:04:41.848047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.848058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:43.506 [2024-11-26 20:04:41.848063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.848073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.506 [2024-11-26 20:04:41.848079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.848089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.506 [2024-11-26 20:04:41.848094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.848104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.506 [2024-11-26 20:04:41.848110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.848121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.506 [2024-11-26 20:04:41.848126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.848136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.506 [2024-11-26 20:04:41.848141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.848152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.506 [2024-11-26 20:04:41.848157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.848171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.506 [2024-11-26 20:04:41.848177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.848188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.506 [2024-11-26 20:04:41.848193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.849418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.506 [2024-11-26 20:04:41.849432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.849444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 
lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.506 [2024-11-26 20:04:41.849449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.849460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.506 [2024-11-26 20:04:41.849465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.849475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.506 [2024-11-26 20:04:41.849480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.849490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.506 [2024-11-26 20:04:41.849495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.849505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.506 [2024-11-26 20:04:41.849510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.849520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.506 [2024-11-26 20:04:41.849525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.849538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.506 [2024-11-26 20:04:41.849543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.850412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.506 [2024-11-26 20:04:41.850422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.850433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.506 [2024-11-26 20:04:41.850438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.850448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.506 [2024-11-26 20:04:41.850453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.850464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.506 [2024-11-26 20:04:41.850468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.850479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.506 [2024-11-26 20:04:41.850484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.850494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.506 [2024-11-26 20:04:41.850499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.850509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.506 [2024-11-26 20:04:41.850514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.850524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.506 [2024-11-26 20:04:41.850529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.850539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.506 [2024-11-26 20:04:41.850544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.850555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.506 [2024-11-26 20:04:41.850560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.850570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.506 [2024-11-26 20:04:41.850575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.850588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.506 [2024-11-26 20:04:41.850593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.850603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.506 [2024-11-26 20:04:41.850608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 
dnr:0 00:26:43.506 [2024-11-26 20:04:41.850618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.506 [2024-11-26 20:04:41.850623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.850634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.506 [2024-11-26 20:04:41.850639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.850649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.506 [2024-11-26 20:04:41.850654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:43.506 [2024-11-26 20:04:41.850664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.506 [2024-11-26 20:04:41.850669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.850680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.507 [2024-11-26 20:04:41.850685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.850695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.507 [2024-11-26 20:04:41.850700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.850710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.507 [2024-11-26 20:04:41.850715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.850726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.507 [2024-11-26 20:04:41.850731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.850741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.507 [2024-11-26 20:04:41.850746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.850757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.507 [2024-11-26 20:04:41.850762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.850772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.507 [2024-11-26 20:04:41.850778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.850788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.507 [2024-11-26 20:04:41.850794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.850804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.507 [2024-11-26 20:04:41.850809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.850820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.507 [2024-11-26 20:04:41.850825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.850835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.507 [2024-11-26 20:04:41.850840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.850850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.507 [2024-11-26 20:04:41.850855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.850866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.507 [2024-11-26 20:04:41.850871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.850881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.507 [2024-11-26 20:04:41.850886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.850896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.507 [2024-11-26 20:04:41.850901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.850912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.507 [2024-11-26 20:04:41.850917] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.850927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.507 [2024-11-26 20:04:41.850932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.850942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.507 [2024-11-26 20:04:41.850947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.850958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.507 [2024-11-26 20:04:41.850964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.850974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.507 [2024-11-26 20:04:41.850980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.850990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.507 [2024-11-26 20:04:41.850995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.851006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.507 [2024-11-26 20:04:41.851011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.851021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.507 [2024-11-26 20:04:41.851026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.851037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.507 [2024-11-26 20:04:41.851042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.851052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.507 [2024-11-26 20:04:41.851057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.851068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:43.507 [2024-11-26 20:04:41.851073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.851083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.507 [2024-11-26 20:04:41.851088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.851098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.507 [2024-11-26 20:04:41.851104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.851114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.507 [2024-11-26 20:04:41.851119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.851130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.507 [2024-11-26 20:04:41.851135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.853033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.507 [2024-11-26 20:04:41.853047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.853072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.507 [2024-11-26 20:04:41.853078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.853088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.507 [2024-11-26 20:04:41.853093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.853103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.507 [2024-11-26 20:04:41.853108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:43.507 [2024-11-26 20:04:41.853118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.508 [2024-11-26 20:04:41.853123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:43.508 [2024-11-26 20:04:41.853133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 
lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.508 [2024-11-26 20:04:41.853138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:43.508 [2024-11-26 20:04:41.853149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.508 [2024-11-26 20:04:41.853154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:43.508 [2024-11-26 20:04:41.853169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.508 [2024-11-26 20:04:41.853174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.508 [2024-11-26 20:04:41.853184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.508 [2024-11-26 20:04:41.853189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.508 [2024-11-26 20:04:41.853200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.508 [2024-11-26 20:04:41.853205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:43.508 [2024-11-26 20:04:41.853215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.508 [2024-11-26 20:04:41.853220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:43.508 [2024-11-26 20:04:41.853230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.508 [2024-11-26 20:04:41.853235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:43.508 [2024-11-26 20:04:41.853245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.508 [2024-11-26 20:04:41.853250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:43.508 [2024-11-26 20:04:41.853262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.508 [2024-11-26 20:04:41.853267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:43.508 [2024-11-26 20:04:41.853277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.508 [2024-11-26 20:04:41.853282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:43.508 [2024-11-26 20:04:41.853293] nvme_qpair.c: 
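The condensed run is uniform enough that a tally says more than the raw lines. A minimal post-processing sketch, assuming the console output has been saved locally as console.log (a hypothetical filename; the job itself does not write such a file):

```bash
# Quick triage of a saved copy of this console output. "console.log" is a
# placeholder name, not a file this job produces.
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' console.log   # total failed completions
grep -Eo '(READ|WRITE) sqid:1' console.log | sort | uniq -c    # same completions, split by opcode
```

On this stretch the first count should land near the ~170 pairs noted above.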
00:26:43.508 11787.80 IOPS, 46.05 MiB/s [2024-11-26T19:04:44.329Z] 11822.31 IOPS, 46.18 MiB/s [2024-11-26T19:04:44.329Z] Received shutdown signal, test time was about 26.856469 seconds
00:26:43.508
00:26:43.508 Latency(us)
00:26:43.508 [2024-11-26T19:04:44.329Z] Device Information : runtime(s)     IOPS    MiB/s   Fail/s    TO/s   Average       min        max
00:26:43.508 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:43.508 Verification LBA range: start 0x0 length 0x4000
00:26:43.508 Nvme0n1             :      26.86  11843.15    46.26     0.00    0.00  10790.09    283.31 3019898.88
00:26:43.508 [2024-11-26T19:04:44.329Z] ===================================================================================================================
00:26:43.508 [2024-11-26T19:04:44.329Z] Total               :            11843.15    46.26     0.00    0.00  10790.09    283.31 3019898.88
00:26:43.508 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:43.770 rmmod nvme_tcp
00:26:43.770 rmmod nvme_fabrics
00:26:43.770 rmmod nvme_keyring
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3787055 ']'
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3787055
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3787055 ']'
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3787055
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3787055
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3787055'
00:26:43.770 killing process with pid 3787055
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3787055
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3787055
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:43.770 20:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:46.318
00:26:46.318 real 0m41.402s
00:26:46.318 user 1m47.015s
00:26:46.318 sys 0m11.648s
00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:46.318 ************************************
00:26:46.318 END TEST nvmf_host_multipath_status
00:26:46.318 ************************************
00:26:46.318 20:04:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:46.318 20:04:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:46.318 20:04:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:46.318 20:04:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:46.318 ************************************
00:26:46.318 START TEST nvmf_discovery_remove_ifc
00:26:46.318 ************************************
00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
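Read past the xtrace prefixes, the multipath_status teardown above reduces to a short command sequence. A condensed sketch of the effective steps; the pid (3787055), the subsystem NQN, and the NIC name (cvl_0_1) are taken verbatim from this run and will differ between runs:

```bash
#!/usr/bin/env bash
# Summary of the cleanup the trace above records, not a script the job ships.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

"$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # drop the test subsystem first
sync                                                   # flush before unloading modules
modprobe -v -r nvme-tcp                                # rmmods nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
kill 3787055 && wait 3787055                           # stop the nvmf target (reactor_0); wait works in
                                                       # the traced harness because the target is its child
iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip the SPDK_NVMF firewall rules
ip -4 addr flush cvl_0_1                               # clear the test NIC's IPv4 addresses
```

The ordering matters: the subsystem is deleted over RPC and the kernel modules are unloaded before the target process is killed and the network state is rolled back, so nothing is left referencing the transport when it goes away.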
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:46.318 * Looking for test storage... 00:26:46.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:46.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.318 --rc genhtml_branch_coverage=1 00:26:46.318 --rc genhtml_function_coverage=1 00:26:46.318 --rc genhtml_legend=1 00:26:46.318 --rc geninfo_all_blocks=1 00:26:46.318 --rc geninfo_unexecuted_blocks=1 00:26:46.318 00:26:46.318 ' 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:46.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.318 --rc genhtml_branch_coverage=1 00:26:46.318 --rc genhtml_function_coverage=1 00:26:46.318 --rc genhtml_legend=1 00:26:46.318 --rc geninfo_all_blocks=1 00:26:46.318 --rc geninfo_unexecuted_blocks=1 00:26:46.318 00:26:46.318 ' 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:46.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.318 --rc genhtml_branch_coverage=1 00:26:46.318 --rc genhtml_function_coverage=1 00:26:46.318 --rc genhtml_legend=1 00:26:46.318 --rc geninfo_all_blocks=1 00:26:46.318 --rc geninfo_unexecuted_blocks=1 00:26:46.318 00:26:46.318 ' 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:46.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.318 --rc genhtml_branch_coverage=1 00:26:46.318 --rc genhtml_function_coverage=1 00:26:46.318 --rc genhtml_legend=1 00:26:46.318 --rc geninfo_all_blocks=1 00:26:46.318 --rc geninfo_unexecuted_blocks=1 00:26:46.318 00:26:46.318 ' 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:46.318 
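The trace above is scripts/common.sh picking lcov option spellings by version: cmp_versions splits the installed lcov version and the threshold "2" on the characters ".-:" and walks the components left to right, so "lt 1.15 2" tests true and the legacy --rc lcov_* names get exported. A minimal bash sketch of that comparison; the field splitting and the component loop mirror the traced commands, while the return handling is a simplified assumption rather than the verbatim upstream body:

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local ver1 ver2 ver1_l ver2_l v
    IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
    local op=$2
    IFS=.-: read -ra ver2 <<< "$3"   # "2"    -> (2)
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        ((ver1[v] = ${ver1[v]:-0}, ver2[v] = ${ver2[v]:-0}))  # pad missing fields with 0
        if ((ver1[v] > ver2[v])); then [[ $op == *'>'* ]]; return; fi
        if ((ver1[v] < ver2[v])); then [[ $op == *'<'* ]]; return; fi
    done
    [[ $op == *'='* ]]               # all components equal: only <=, >=, == pass
}

lt 1.15 2 && echo 'old lcov: use --rc lcov_branch_coverage=1 ...'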
20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.318 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:46.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:46.319 20:04:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:54.466 20:04:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:54.466 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:54.466 20:04:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:54.466 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:54.466 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:54.466 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:54.467 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:54.467 
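Condensed, the nvmf_tcp_init sequence traced above amounts to the block below. Every command is copied from the trace (cvl_0_0/cvl_0_1 are the two E810 ports found earlier); only the grouping and comments are added:

# Isolate the target port in its own namespace; the initiator port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address the pair: initiator 10.0.0.1/24, target 10.0.0.2/24.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring everything up, including loopback inside the namespace.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Admit NVMe/TCP (port 4420) from the initiator side. The SPDK_NVMF comment is
# what lets the iptr teardown helper strip the rule again later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

The two pings that follow verify reachability in each direction before the target is started.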
20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:54.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:54.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:26:54.467 00:26:54.467 --- 10.0.0.2 ping statistics --- 00:26:54.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.467 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:54.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:54.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:26:54.467 00:26:54.467 --- 10.0.0.1 ping statistics --- 00:26:54.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.467 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3797329 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3797329 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3797329 ']' 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:54.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:54.467 20:04:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.467 [2024-11-26 20:04:54.516895] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:26:54.467 [2024-11-26 20:04:54.516962] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.467 [2024-11-26 20:04:54.617791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.467 [2024-11-26 20:04:54.668208] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:54.467 [2024-11-26 20:04:54.668259] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:54.467 [2024-11-26 20:04:54.668267] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:54.467 [2024-11-26 20:04:54.668274] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:54.467 [2024-11-26 20:04:54.668280] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:54.467 [2024-11-26 20:04:54.669054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.730 20:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:54.730 20:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:54.731 20:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:54.731 20:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:54.731 20:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.731 20:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:54.731 20:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:54.731 20:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.731 20:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.731 [2024-11-26 20:04:55.384841] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:54.731 [2024-11-26 20:04:55.393094] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:54.731 null0 00:26:54.731 [2024-11-26 20:04:55.425044] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:54.731 20:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.731 20:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3797672 00:26:54.731 20:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3797672 /tmp/host.sock 00:26:54.731 20:04:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:54.731 20:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3797672 ']' 00:26:54.731 20:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:54.731 20:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:54.731 20:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:54.731 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:54.731 20:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:54.731 20:04:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.731 [2024-11-26 20:04:55.502444] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:26:54.731 [2024-11-26 20:04:55.502507] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3797672 ] 00:26:54.993 [2024-11-26 20:04:55.593625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.993 [2024-11-26 20:04:55.647305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.566 20:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:55.566 20:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:55.566 20:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:55.566 20:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:55.566 20:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.566 20:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:55.566 20:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.566 20:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:55.566 20:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.566 20:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:55.828 20:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.828 20:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:55.828 20:04:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.828 20:04:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.772 [2024-11-26 20:04:57.454685] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:56.772 [2024-11-26 20:04:57.454705] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:56.772 [2024-11-26 20:04:57.454717] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:56.772 [2024-11-26 20:04:57.540998] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:57.034 [2024-11-26 20:04:57.602935] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:57.034 [2024-11-26 20:04:57.603774] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1f41410:1 started. 00:26:57.034 [2024-11-26 20:04:57.605370] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:57.035 [2024-11-26 20:04:57.605411] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:57.035 [2024-11-26 20:04:57.605433] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:57.035 [2024-11-26 20:04:57.605447] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:57.035 [2024-11-26 20:04:57.605467] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:57.035 20:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.035 20:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:57.035 20:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:57.035 20:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:57.035 20:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:57.035 20:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.035 20:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:57.035 20:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.035 20:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:57.035 20:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.035 [2024-11-26 20:04:57.652791] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1f41410 was disconnected and freed. delete nvme_qpair. 
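The wait_for_bdev/get_bdev_list pair the trace keeps re-entering (discovery_remove_ifc.sh @29/@33/@34, first armed at @72) is the synchronization primitive for the rest of the test: poll the host's bdev list over /tmp/host.sock once per second until it matches an expected string. A plausible sketch; the rpc_cmd | jq | sort | xargs pipeline and the one-second retry are verbatim from the trace, the function bodies are reconstructed rather than quoted:

get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Expected values over the course of this test: "nvme0n1" after the first
    # attach, "" after the interface is pulled, "nvme1n1" after it returns.
    while [[ $(get_bdev_list) != "$1" ]]; do
        sleep 1
    done
}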
00:26:57.035 20:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:57.035 20:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:57.035 20:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:57.035 20:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:57.035 20:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:57.035 20:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:57.035 20:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:57.035 20:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.035 20:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:57.035 20:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.035 20:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:57.035 20:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.035 20:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:57.035 20:04:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:58.420 20:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:58.420 20:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:58.420 20:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:58.420 20:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.420 20:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:58.420 20:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:58.420 20:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:58.420 20:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.420 20:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:58.420 20:04:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:59.363 20:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:59.363 20:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.363 20:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:59.363 20:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.363 20:04:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:59.363 20:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.363 20:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:59.363 20:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.363 20:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:59.363 20:04:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:00.303 20:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:00.303 20:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:00.303 20:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:00.303 20:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.303 20:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:00.303 20:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:00.303 20:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:00.303 20:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.303 20:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:00.303 20:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:01.245 20:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:01.245 20:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:01.245 20:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:01.245 20:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.245 20:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:01.245 20:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:01.245 20:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:01.245 20:05:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.245 20:05:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:01.245 20:05:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:02.257 [2024-11-26 20:05:03.046167] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:02.257 [2024-11-26 20:05:03.046206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.257 [2024-11-26 20:05:03.046215] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.257 [2024-11-26 20:05:03.046223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.257 [2024-11-26 20:05:03.046228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.257 [2024-11-26 20:05:03.046234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.257 [2024-11-26 20:05:03.046239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.257 [2024-11-26 20:05:03.046245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.257 [2024-11-26 20:05:03.046250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.257 [2024-11-26 20:05:03.046256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.257 [2024-11-26 20:05:03.046261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.257 [2024-11-26 20:05:03.046266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1dc50 is same with the state(6) to be set 00:27:02.257 20:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:02.257 20:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:02.257 [2024-11-26 20:05:03.056184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f1dc50 (9): Bad file descriptor 00:27:02.257 20:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:02.257 20:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.257 20:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:02.257 20:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:02.257 20:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:02.257 [2024-11-26 20:05:03.066219] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:02.257 [2024-11-26 20:05:03.066231] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:02.257 [2024-11-26 20:05:03.066234] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:02.257 [2024-11-26 20:05:03.066239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:02.257 [2024-11-26 20:05:03.066257] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
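The timeout/abort storm above is the intended result of steps @75/@76 a few seconds earlier: the test deleted the target address and downed the port inside the namespace, so spdk_sock_recv() on the host side fails with errno 110 (Connection timed out) and every queued admin command (the ASYNC EVENT REQUESTs and the KEEP ALIVE) is completed as ABORTED - SQ DELETION. The two traced commands:

ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down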
00:27:02.530 20:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.530 20:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:02.530 20:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:03.473 [2024-11-26 20:05:04.091209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:03.473 [2024-11-26 20:05:04.091299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dc50 with addr=10.0.0.2, port=4420 00:27:03.473 [2024-11-26 20:05:04.091331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1dc50 is same with the state(6) to be set 00:27:03.473 [2024-11-26 20:05:04.091385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f1dc50 (9): Bad file descriptor 00:27:03.473 [2024-11-26 20:05:04.091508] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:27:03.473 [2024-11-26 20:05:04.091566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:03.473 [2024-11-26 20:05:04.091589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:03.473 [2024-11-26 20:05:04.091612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:03.473 [2024-11-26 20:05:04.091633] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:03.473 [2024-11-26 20:05:04.091650] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:03.473 [2024-11-26 20:05:04.091664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:03.473 [2024-11-26 20:05:04.091687] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:27:03.473 [2024-11-26 20:05:04.091702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:03.473 20:05:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:03.473 20:05:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:03.473 20:05:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:03.473 20:05:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.473 20:05:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:03.473 20:05:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:03.473 20:05:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:03.473 20:05:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.473 20:05:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:03.473 20:05:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:04.415 [2024-11-26 20:05:05.094109] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:04.415 [2024-11-26 20:05:05.094125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:04.415 [2024-11-26 20:05:05.094134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:04.415 [2024-11-26 20:05:05.094139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:04.415 [2024-11-26 20:05:05.094144] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:04.415 [2024-11-26 20:05:05.094149] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:04.415 [2024-11-26 20:05:05.094153] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:04.415 [2024-11-26 20:05:05.094156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
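Each reconnect attempt then fails the same way (connect() errno 110), and because the discovery session was opened with deliberately short failure timers, the controller is torn down instead of being retried indefinitely. The @69 invocation that set those timers, reflowed from the trace:

rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 \
    --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 \
    --wait-for-attach
# ctrlr-loss-timeout-sec 2   : delete the controller after 2s without a connection
# reconnect-delay-sec 1      : retry the TCP connect once per second
# fast-io-fail-timeout-sec 1 : fail outstanding I/O after 1s so waiters unblock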
00:27:04.415 [2024-11-26 20:05:05.094180] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:04.415 [2024-11-26 20:05:05.094196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.415 [2024-11-26 20:05:05.094203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.415 [2024-11-26 20:05:05.094210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.415 [2024-11-26 20:05:05.094216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.416 [2024-11-26 20:05:05.094221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.416 [2024-11-26 20:05:05.094226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.416 [2024-11-26 20:05:05.094232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.416 [2024-11-26 20:05:05.094237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.416 [2024-11-26 20:05:05.094243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.416 [2024-11-26 20:05:05.094248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.416 [2024-11-26 20:05:05.094253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
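Not part of the traced script, but when reproducing this by hand the teardown is easy to watch from a second shell; bdev_nvme_get_controllers is a standard SPDK RPC (its output schema varies by release, hence the bare jq):

watch -n1 '/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq .'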
00:27:04.416 [2024-11-26 20:05:05.094937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f0d350 (9): Bad file descriptor 00:27:04.416 [2024-11-26 20:05:05.095947] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:04.416 [2024-11-26 20:05:05.095955] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:04.416 20:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:04.416 20:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:04.416 20:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:04.416 20:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.416 20:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:04.416 20:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.416 20:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:04.416 20:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.416 20:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:04.416 20:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:04.416 20:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:04.677 20:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:04.677 20:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:04.677 20:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:04.677 20:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:04.677 20:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.677 20:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:04.677 20:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.677 20:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:04.677 20:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.677 20:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:04.677 20:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:05.618 20:05:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:05.618 20:05:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:05.618 20:05:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:05.618 20:05:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.618 20:05:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:05.618 20:05:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:05.618 20:05:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:05.618 20:05:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.618 20:05:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:05.618 20:05:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:06.558 [2024-11-26 20:05:07.108371] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:06.558 [2024-11-26 20:05:07.108384] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:06.558 [2024-11-26 20:05:07.108393] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:06.558 [2024-11-26 20:05:07.196658] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:06.818 [2024-11-26 20:05:07.378669] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:27:06.818 [2024-11-26 20:05:07.379505] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1ef6eb0:1 started. 00:27:06.818 [2024-11-26 20:05:07.380440] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:06.818 [2024-11-26 20:05:07.380469] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:06.818 [2024-11-26 20:05:07.380484] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:06.818 [2024-11-26 20:05:07.380495] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:06.818 [2024-11-26 20:05:07.380502] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:06.818 [2024-11-26 20:05:07.386492] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1ef6eb0 was disconnected and freed. delete nvme_qpair. 
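The repeated checks against \n\v\m\e\1\n\1 above are a wait loop: after re-adding 10.0.0.2/24 to cvl_0_0 inside the cvl_0_0_ns_spdk namespace and bringing the link back up (the @82/@83 commands in the trace), the test polls get_bdev_list once a second until discovery re-attaches the subsystem and the namespace reappears as nvme1n1. A sketch of that loop, under the same assumptions as the get_bdev_list sketch earlier (the real script may cap the number of retries, which is not visible in this trace):

    # Poll until the bdev list matches the expected name, as implied by
    # the repeated '[[ ... != \n\v\m\e\1\n\1 ]]' / 'sleep 1' pairs above.
    wait_for_bdev() {
        local bdev=$1
        while [[ "$(get_bdev_list)" != "$bdev" ]]; do
            sleep 1
        done
    }

    # The re-add sequence logged at @82/@83, followed by the wait at @86.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1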
00:27:06.818 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:06.818 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:06.818 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:06.818 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.818 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:06.818 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:06.818 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:06.818 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.818 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:06.818 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:06.818 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3797672 00:27:06.818 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3797672 ']' 00:27:06.818 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3797672 00:27:06.818 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:06.818 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:06.818 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3797672 00:27:06.818 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:06.818 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:06.818 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3797672' 00:27:06.818 killing process with pid 3797672 00:27:06.818 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3797672 00:27:06.818 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3797672 00:27:07.078 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:07.078 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:07.078 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:27:07.078 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:07.078 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:27:07.078 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:07.078 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:07.078 rmmod nvme_tcp 00:27:07.078 rmmod nvme_fabrics 00:27:07.078 rmmod nvme_keyring 00:27:07.078 20:05:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:07.078 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:27:07.078 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:27:07.078 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3797329 ']' 00:27:07.078 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3797329 00:27:07.078 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3797329 ']' 00:27:07.078 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3797329 00:27:07.078 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:07.078 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:07.078 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3797329 00:27:07.078 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:07.078 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:07.078 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3797329' 00:27:07.078 killing process with pid 3797329 00:27:07.078 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3797329 00:27:07.078 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3797329 00:27:07.078 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:07.078 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:07.078 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:07.078 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:27:07.078 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:27:07.078 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:07.078 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:27:07.339 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:07.339 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:07.339 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.339 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:07.339 20:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.254 20:05:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:09.254 00:27:09.254 real 0m23.262s 00:27:09.254 user 0m27.197s 00:27:09.254 sys 0m7.154s 00:27:09.254 20:05:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:27:09.254 20:05:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:09.254 ************************************ 00:27:09.254 END TEST nvmf_discovery_remove_ifc 00:27:09.254 ************************************ 00:27:09.254 20:05:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:09.254 20:05:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:09.254 20:05:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:09.254 20:05:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.254 ************************************ 00:27:09.254 START TEST nvmf_identify_kernel_target 00:27:09.254 ************************************ 00:27:09.254 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:09.516 * Looking for test storage... 00:27:09.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:09.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.516 --rc genhtml_branch_coverage=1 00:27:09.516 --rc genhtml_function_coverage=1 00:27:09.516 --rc genhtml_legend=1 00:27:09.516 --rc geninfo_all_blocks=1 00:27:09.516 --rc geninfo_unexecuted_blocks=1 00:27:09.516 00:27:09.516 ' 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:09.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.516 --rc genhtml_branch_coverage=1 00:27:09.516 --rc genhtml_function_coverage=1 00:27:09.516 --rc genhtml_legend=1 00:27:09.516 --rc geninfo_all_blocks=1 00:27:09.516 --rc geninfo_unexecuted_blocks=1 00:27:09.516 00:27:09.516 ' 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:09.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.516 --rc genhtml_branch_coverage=1 00:27:09.516 --rc genhtml_function_coverage=1 00:27:09.516 --rc genhtml_legend=1 00:27:09.516 --rc geninfo_all_blocks=1 00:27:09.516 --rc geninfo_unexecuted_blocks=1 00:27:09.516 00:27:09.516 ' 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:09.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.516 --rc genhtml_branch_coverage=1 00:27:09.516 --rc genhtml_function_coverage=1 00:27:09.516 --rc genhtml_legend=1 00:27:09.516 --rc geninfo_all_blocks=1 00:27:09.516 --rc geninfo_unexecuted_blocks=1 00:27:09.516 00:27:09.516 ' 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:09.516 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:27:09.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:27:09.517 20:05:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:27:17.663 20:05:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:17.663 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:17.663 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:17.663 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:17.664 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:17.664 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:17.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:17.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:27:17.664 00:27:17.664 --- 10.0.0.2 ping statistics --- 00:27:17.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.664 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:17.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:17.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:27:17.664 00:27:17.664 --- 10.0.0.1 ping statistics --- 00:27:17.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.664 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:17.664 20:05:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:17.664 20:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:20.969 Waiting for block devices as requested 00:27:20.969 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:20.969 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:20.969 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:20.969 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:20.969 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:20.969 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:21.231 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:21.231 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:21.231 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:21.492 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:21.492 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:21.753 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:21.753 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:21.753 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:22.015 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:22.015 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:22.015 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:22.276 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:22.276 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:22.276 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:22.276 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:22.276 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:22.276 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
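The configure_kernel_target trace that begins here drives the in-kernel nvmet target purely through configfs: make the subsystem, namespace and port directories, write the attributes, then symlink the subsystem into the port. Bash xtrace does not echo redirection targets, so the echo lines below show the values written but not the files they land in; the sketch fills those in from the stock kernel nvmet configfs layout, so the attribute file names are assumptions this log does not itself prove (note the serial number in the later identify output is kernel-generated, while the SPDK-prefixed string surfaces as the Model Number):

    # Sketch of the configfs plumbing behind configure_kernel_target;
    # attribute file names follow the upstream nvmet layout (assumed).
    modprobe nvmet        # nvme-tcp was already loaded earlier, at common.sh@502
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1

    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"

Once the port is linked, the discovery and identify dumps that fill the rest of this section can be reproduced with the two probes the script runs next, lifted from the trace below (the hostnqn/hostid are the values nvme gen-hostnqn produced for this run; substitute your own):

    nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420
    spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'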
00:27:22.276 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:22.276 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:22.276 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:22.539 No valid GPT data, bailing 00:27:22.539 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:22.539 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:22.539 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:22.539 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:22.539 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:22.539 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:22.539 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:22.539 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:22.539 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:22.539 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:27:22.539 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:22.539 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:27:22.539 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:22.539 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:27:22.539 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:27:22.539 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:27:22.539 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:22.539 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:22.539 00:27:22.539 Discovery Log Number of Records 2, Generation counter 2 00:27:22.539 =====Discovery Log Entry 0====== 00:27:22.539 trtype: tcp 00:27:22.539 adrfam: ipv4 00:27:22.539 subtype: current discovery subsystem 00:27:22.539 treq: not specified, sq flow control disable supported 00:27:22.539 portid: 1 00:27:22.539 trsvcid: 4420 00:27:22.539 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:22.539 traddr: 10.0.0.1 00:27:22.539 eflags: none 00:27:22.539 sectype: none 00:27:22.539 =====Discovery Log Entry 1====== 00:27:22.539 trtype: tcp 00:27:22.539 adrfam: ipv4 00:27:22.539 subtype: nvme subsystem 00:27:22.539 treq: not specified, sq flow control disable 
supported 00:27:22.539 portid: 1 00:27:22.539 trsvcid: 4420 00:27:22.539 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:22.539 traddr: 10.0.0.1 00:27:22.539 eflags: none 00:27:22.539 sectype: none 00:27:22.539 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:22.539 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:22.539 ===================================================== 00:27:22.539 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:22.539 ===================================================== 00:27:22.539 Controller Capabilities/Features 00:27:22.539 ================================ 00:27:22.539 Vendor ID: 0000 00:27:22.539 Subsystem Vendor ID: 0000 00:27:22.539 Serial Number: 1bfb5d37f5afd86c9375 00:27:22.539 Model Number: Linux 00:27:22.539 Firmware Version: 6.8.9-20 00:27:22.539 Recommended Arb Burst: 0 00:27:22.539 IEEE OUI Identifier: 00 00 00 00:27:22.539 Multi-path I/O 00:27:22.539 May have multiple subsystem ports: No 00:27:22.539 May have multiple controllers: No 00:27:22.539 Associated with SR-IOV VF: No 00:27:22.539 Max Data Transfer Size: Unlimited 00:27:22.539 Max Number of Namespaces: 0 00:27:22.539 Max Number of I/O Queues: 1024 00:27:22.539 NVMe Specification Version (VS): 1.3 00:27:22.539 NVMe Specification Version (Identify): 1.3 00:27:22.539 Maximum Queue Entries: 1024 00:27:22.539 Contiguous Queues Required: No 00:27:22.539 Arbitration Mechanisms Supported 00:27:22.539 Weighted Round Robin: Not Supported 00:27:22.539 Vendor Specific: Not Supported 00:27:22.539 Reset Timeout: 7500 ms 00:27:22.539 Doorbell Stride: 4 bytes 00:27:22.539 NVM Subsystem Reset: Not Supported 00:27:22.539 Command Sets Supported 00:27:22.539 NVM Command Set: Supported 00:27:22.539 Boot Partition: Not Supported 00:27:22.540 Memory Page Size Minimum: 4096 bytes 00:27:22.540 Memory Page Size Maximum: 4096 bytes 00:27:22.540 Persistent Memory Region: Not Supported 00:27:22.540 Optional Asynchronous Events Supported 00:27:22.540 Namespace Attribute Notices: Not Supported 00:27:22.540 Firmware Activation Notices: Not Supported 00:27:22.540 ANA Change Notices: Not Supported 00:27:22.540 PLE Aggregate Log Change Notices: Not Supported 00:27:22.540 LBA Status Info Alert Notices: Not Supported 00:27:22.540 EGE Aggregate Log Change Notices: Not Supported 00:27:22.540 Normal NVM Subsystem Shutdown event: Not Supported 00:27:22.540 Zone Descriptor Change Notices: Not Supported 00:27:22.540 Discovery Log Change Notices: Supported 00:27:22.540 Controller Attributes 00:27:22.540 128-bit Host Identifier: Not Supported 00:27:22.540 Non-Operational Permissive Mode: Not Supported 00:27:22.540 NVM Sets: Not Supported 00:27:22.540 Read Recovery Levels: Not Supported 00:27:22.540 Endurance Groups: Not Supported 00:27:22.540 Predictable Latency Mode: Not Supported 00:27:22.540 Traffic Based Keep ALive: Not Supported 00:27:22.540 Namespace Granularity: Not Supported 00:27:22.540 SQ Associations: Not Supported 00:27:22.540 UUID List: Not Supported 00:27:22.540 Multi-Domain Subsystem: Not Supported 00:27:22.540 Fixed Capacity Management: Not Supported 00:27:22.540 Variable Capacity Management: Not Supported 00:27:22.540 Delete Endurance Group: Not Supported 00:27:22.540 Delete NVM Set: Not Supported 00:27:22.540 Extended LBA Formats Supported: Not Supported 00:27:22.540 Flexible Data Placement 
Supported: Not Supported 00:27:22.540 00:27:22.540 Controller Memory Buffer Support 00:27:22.540 ================================ 00:27:22.540 Supported: No 00:27:22.540 00:27:22.540 Persistent Memory Region Support 00:27:22.540 ================================ 00:27:22.540 Supported: No 00:27:22.540 00:27:22.540 Admin Command Set Attributes 00:27:22.540 ============================ 00:27:22.540 Security Send/Receive: Not Supported 00:27:22.540 Format NVM: Not Supported 00:27:22.540 Firmware Activate/Download: Not Supported 00:27:22.540 Namespace Management: Not Supported 00:27:22.540 Device Self-Test: Not Supported 00:27:22.540 Directives: Not Supported 00:27:22.540 NVMe-MI: Not Supported 00:27:22.540 Virtualization Management: Not Supported 00:27:22.540 Doorbell Buffer Config: Not Supported 00:27:22.540 Get LBA Status Capability: Not Supported 00:27:22.540 Command & Feature Lockdown Capability: Not Supported 00:27:22.540 Abort Command Limit: 1 00:27:22.540 Async Event Request Limit: 1 00:27:22.540 Number of Firmware Slots: N/A 00:27:22.540 Firmware Slot 1 Read-Only: N/A 00:27:22.540 Firmware Activation Without Reset: N/A 00:27:22.540 Multiple Update Detection Support: N/A 00:27:22.540 Firmware Update Granularity: No Information Provided 00:27:22.540 Per-Namespace SMART Log: No 00:27:22.540 Asymmetric Namespace Access Log Page: Not Supported 00:27:22.540 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:22.540 Command Effects Log Page: Not Supported 00:27:22.540 Get Log Page Extended Data: Supported 00:27:22.540 Telemetry Log Pages: Not Supported 00:27:22.540 Persistent Event Log Pages: Not Supported 00:27:22.540 Supported Log Pages Log Page: May Support 00:27:22.540 Commands Supported & Effects Log Page: Not Supported 00:27:22.540 Feature Identifiers & Effects Log Page:May Support 00:27:22.540 NVMe-MI Commands & Effects Log Page: May Support 00:27:22.540 Data Area 4 for Telemetry Log: Not Supported 00:27:22.540 Error Log Page Entries Supported: 1 00:27:22.540 Keep Alive: Not Supported 00:27:22.540 00:27:22.540 NVM Command Set Attributes 00:27:22.540 ========================== 00:27:22.540 Submission Queue Entry Size 00:27:22.540 Max: 1 00:27:22.540 Min: 1 00:27:22.540 Completion Queue Entry Size 00:27:22.540 Max: 1 00:27:22.540 Min: 1 00:27:22.540 Number of Namespaces: 0 00:27:22.540 Compare Command: Not Supported 00:27:22.540 Write Uncorrectable Command: Not Supported 00:27:22.540 Dataset Management Command: Not Supported 00:27:22.540 Write Zeroes Command: Not Supported 00:27:22.540 Set Features Save Field: Not Supported 00:27:22.540 Reservations: Not Supported 00:27:22.540 Timestamp: Not Supported 00:27:22.540 Copy: Not Supported 00:27:22.540 Volatile Write Cache: Not Present 00:27:22.540 Atomic Write Unit (Normal): 1 00:27:22.540 Atomic Write Unit (PFail): 1 00:27:22.540 Atomic Compare & Write Unit: 1 00:27:22.540 Fused Compare & Write: Not Supported 00:27:22.540 Scatter-Gather List 00:27:22.540 SGL Command Set: Supported 00:27:22.540 SGL Keyed: Not Supported 00:27:22.540 SGL Bit Bucket Descriptor: Not Supported 00:27:22.540 SGL Metadata Pointer: Not Supported 00:27:22.540 Oversized SGL: Not Supported 00:27:22.540 SGL Metadata Address: Not Supported 00:27:22.540 SGL Offset: Supported 00:27:22.540 Transport SGL Data Block: Not Supported 00:27:22.540 Replay Protected Memory Block: Not Supported 00:27:22.540 00:27:22.540 Firmware Slot Information 00:27:22.540 ========================= 00:27:22.540 Active slot: 0 00:27:22.540 00:27:22.540 00:27:22.540 Error Log 00:27:22.540 
========= 00:27:22.540 00:27:22.540 Active Namespaces 00:27:22.540 ================= 00:27:22.540 Discovery Log Page 00:27:22.540 ================== 00:27:22.540 Generation Counter: 2 00:27:22.540 Number of Records: 2 00:27:22.540 Record Format: 0 00:27:22.540 00:27:22.540 Discovery Log Entry 0 00:27:22.540 ---------------------- 00:27:22.540 Transport Type: 3 (TCP) 00:27:22.540 Address Family: 1 (IPv4) 00:27:22.540 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:22.540 Entry Flags: 00:27:22.540 Duplicate Returned Information: 0 00:27:22.540 Explicit Persistent Connection Support for Discovery: 0 00:27:22.540 Transport Requirements: 00:27:22.540 Secure Channel: Not Specified 00:27:22.540 Port ID: 1 (0x0001) 00:27:22.540 Controller ID: 65535 (0xffff) 00:27:22.540 Admin Max SQ Size: 32 00:27:22.540 Transport Service Identifier: 4420 00:27:22.540 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:22.540 Transport Address: 10.0.0.1 00:27:22.540 Discovery Log Entry 1 00:27:22.540 ---------------------- 00:27:22.540 Transport Type: 3 (TCP) 00:27:22.540 Address Family: 1 (IPv4) 00:27:22.540 Subsystem Type: 2 (NVM Subsystem) 00:27:22.540 Entry Flags: 00:27:22.540 Duplicate Returned Information: 0 00:27:22.540 Explicit Persistent Connection Support for Discovery: 0 00:27:22.540 Transport Requirements: 00:27:22.540 Secure Channel: Not Specified 00:27:22.540 Port ID: 1 (0x0001) 00:27:22.540 Controller ID: 65535 (0xffff) 00:27:22.540 Admin Max SQ Size: 32 00:27:22.540 Transport Service Identifier: 4420 00:27:22.540 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:22.540 Transport Address: 10.0.0.1 00:27:22.540 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:22.804 get_feature(0x01) failed 00:27:22.804 get_feature(0x02) failed 00:27:22.804 get_feature(0x04) failed 00:27:22.804 ===================================================== 00:27:22.804 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:22.804 ===================================================== 00:27:22.804 Controller Capabilities/Features 00:27:22.804 ================================ 00:27:22.804 Vendor ID: 0000 00:27:22.804 Subsystem Vendor ID: 0000 00:27:22.804 Serial Number: e007ea657558dfbaf124 00:27:22.804 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:22.804 Firmware Version: 6.8.9-20 00:27:22.804 Recommended Arb Burst: 6 00:27:22.804 IEEE OUI Identifier: 00 00 00 00:27:22.804 Multi-path I/O 00:27:22.804 May have multiple subsystem ports: Yes 00:27:22.804 May have multiple controllers: Yes 00:27:22.804 Associated with SR-IOV VF: No 00:27:22.804 Max Data Transfer Size: Unlimited 00:27:22.804 Max Number of Namespaces: 1024 00:27:22.804 Max Number of I/O Queues: 128 00:27:22.804 NVMe Specification Version (VS): 1.3 00:27:22.804 NVMe Specification Version (Identify): 1.3 00:27:22.804 Maximum Queue Entries: 1024 00:27:22.804 Contiguous Queues Required: No 00:27:22.804 Arbitration Mechanisms Supported 00:27:22.804 Weighted Round Robin: Not Supported 00:27:22.804 Vendor Specific: Not Supported 00:27:22.804 Reset Timeout: 7500 ms 00:27:22.804 Doorbell Stride: 4 bytes 00:27:22.804 NVM Subsystem Reset: Not Supported 00:27:22.804 Command Sets Supported 00:27:22.804 NVM Command Set: Supported 00:27:22.804 Boot Partition: Not Supported 00:27:22.804 
Memory Page Size Minimum: 4096 bytes 00:27:22.804 Memory Page Size Maximum: 4096 bytes 00:27:22.804 Persistent Memory Region: Not Supported 00:27:22.804 Optional Asynchronous Events Supported 00:27:22.804 Namespace Attribute Notices: Supported 00:27:22.804 Firmware Activation Notices: Not Supported 00:27:22.804 ANA Change Notices: Supported 00:27:22.804 PLE Aggregate Log Change Notices: Not Supported 00:27:22.804 LBA Status Info Alert Notices: Not Supported 00:27:22.804 EGE Aggregate Log Change Notices: Not Supported 00:27:22.804 Normal NVM Subsystem Shutdown event: Not Supported 00:27:22.804 Zone Descriptor Change Notices: Not Supported 00:27:22.804 Discovery Log Change Notices: Not Supported 00:27:22.804 Controller Attributes 00:27:22.804 128-bit Host Identifier: Supported 00:27:22.804 Non-Operational Permissive Mode: Not Supported 00:27:22.804 NVM Sets: Not Supported 00:27:22.804 Read Recovery Levels: Not Supported 00:27:22.804 Endurance Groups: Not Supported 00:27:22.804 Predictable Latency Mode: Not Supported 00:27:22.804 Traffic Based Keep ALive: Supported 00:27:22.804 Namespace Granularity: Not Supported 00:27:22.804 SQ Associations: Not Supported 00:27:22.804 UUID List: Not Supported 00:27:22.804 Multi-Domain Subsystem: Not Supported 00:27:22.804 Fixed Capacity Management: Not Supported 00:27:22.804 Variable Capacity Management: Not Supported 00:27:22.804 Delete Endurance Group: Not Supported 00:27:22.804 Delete NVM Set: Not Supported 00:27:22.804 Extended LBA Formats Supported: Not Supported 00:27:22.804 Flexible Data Placement Supported: Not Supported 00:27:22.804 00:27:22.804 Controller Memory Buffer Support 00:27:22.804 ================================ 00:27:22.804 Supported: No 00:27:22.804 00:27:22.804 Persistent Memory Region Support 00:27:22.804 ================================ 00:27:22.804 Supported: No 00:27:22.804 00:27:22.804 Admin Command Set Attributes 00:27:22.804 ============================ 00:27:22.804 Security Send/Receive: Not Supported 00:27:22.804 Format NVM: Not Supported 00:27:22.804 Firmware Activate/Download: Not Supported 00:27:22.804 Namespace Management: Not Supported 00:27:22.804 Device Self-Test: Not Supported 00:27:22.804 Directives: Not Supported 00:27:22.804 NVMe-MI: Not Supported 00:27:22.804 Virtualization Management: Not Supported 00:27:22.804 Doorbell Buffer Config: Not Supported 00:27:22.804 Get LBA Status Capability: Not Supported 00:27:22.804 Command & Feature Lockdown Capability: Not Supported 00:27:22.804 Abort Command Limit: 4 00:27:22.804 Async Event Request Limit: 4 00:27:22.804 Number of Firmware Slots: N/A 00:27:22.804 Firmware Slot 1 Read-Only: N/A 00:27:22.804 Firmware Activation Without Reset: N/A 00:27:22.804 Multiple Update Detection Support: N/A 00:27:22.804 Firmware Update Granularity: No Information Provided 00:27:22.804 Per-Namespace SMART Log: Yes 00:27:22.804 Asymmetric Namespace Access Log Page: Supported 00:27:22.804 ANA Transition Time : 10 sec 00:27:22.804 00:27:22.804 Asymmetric Namespace Access Capabilities 00:27:22.804 ANA Optimized State : Supported 00:27:22.804 ANA Non-Optimized State : Supported 00:27:22.804 ANA Inaccessible State : Supported 00:27:22.804 ANA Persistent Loss State : Supported 00:27:22.804 ANA Change State : Supported 00:27:22.804 ANAGRPID is not changed : No 00:27:22.804 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:22.804 00:27:22.804 ANA Group Identifier Maximum : 128 00:27:22.804 Number of ANA Group Identifiers : 128 00:27:22.804 Max Number of Allowed Namespaces : 1024 00:27:22.804 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:22.804 Command Effects Log Page: Supported 00:27:22.804 Get Log Page Extended Data: Supported 00:27:22.804 Telemetry Log Pages: Not Supported 00:27:22.804 Persistent Event Log Pages: Not Supported 00:27:22.804 Supported Log Pages Log Page: May Support 00:27:22.804 Commands Supported & Effects Log Page: Not Supported 00:27:22.804 Feature Identifiers & Effects Log Page:May Support 00:27:22.804 NVMe-MI Commands & Effects Log Page: May Support 00:27:22.804 Data Area 4 for Telemetry Log: Not Supported 00:27:22.804 Error Log Page Entries Supported: 128 00:27:22.804 Keep Alive: Supported 00:27:22.804 Keep Alive Granularity: 1000 ms 00:27:22.804 00:27:22.804 NVM Command Set Attributes 00:27:22.804 ========================== 00:27:22.804 Submission Queue Entry Size 00:27:22.804 Max: 64 00:27:22.804 Min: 64 00:27:22.804 Completion Queue Entry Size 00:27:22.804 Max: 16 00:27:22.804 Min: 16 00:27:22.804 Number of Namespaces: 1024 00:27:22.804 Compare Command: Not Supported 00:27:22.804 Write Uncorrectable Command: Not Supported 00:27:22.804 Dataset Management Command: Supported 00:27:22.804 Write Zeroes Command: Supported 00:27:22.804 Set Features Save Field: Not Supported 00:27:22.804 Reservations: Not Supported 00:27:22.804 Timestamp: Not Supported 00:27:22.804 Copy: Not Supported 00:27:22.804 Volatile Write Cache: Present 00:27:22.804 Atomic Write Unit (Normal): 1 00:27:22.804 Atomic Write Unit (PFail): 1 00:27:22.804 Atomic Compare & Write Unit: 1 00:27:22.804 Fused Compare & Write: Not Supported 00:27:22.804 Scatter-Gather List 00:27:22.804 SGL Command Set: Supported 00:27:22.804 SGL Keyed: Not Supported 00:27:22.804 SGL Bit Bucket Descriptor: Not Supported 00:27:22.804 SGL Metadata Pointer: Not Supported 00:27:22.804 Oversized SGL: Not Supported 00:27:22.804 SGL Metadata Address: Not Supported 00:27:22.804 SGL Offset: Supported 00:27:22.804 Transport SGL Data Block: Not Supported 00:27:22.804 Replay Protected Memory Block: Not Supported 00:27:22.804 00:27:22.804 Firmware Slot Information 00:27:22.804 ========================= 00:27:22.804 Active slot: 0 00:27:22.804 00:27:22.804 Asymmetric Namespace Access 00:27:22.804 =========================== 00:27:22.804 Change Count : 0 00:27:22.804 Number of ANA Group Descriptors : 1 00:27:22.804 ANA Group Descriptor : 0 00:27:22.804 ANA Group ID : 1 00:27:22.804 Number of NSID Values : 1 00:27:22.804 Change Count : 0 00:27:22.804 ANA State : 1 00:27:22.804 Namespace Identifier : 1 00:27:22.804 00:27:22.804 Commands Supported and Effects 00:27:22.804 ============================== 00:27:22.804 Admin Commands 00:27:22.804 -------------- 00:27:22.804 Get Log Page (02h): Supported 00:27:22.804 Identify (06h): Supported 00:27:22.804 Abort (08h): Supported 00:27:22.804 Set Features (09h): Supported 00:27:22.804 Get Features (0Ah): Supported 00:27:22.804 Asynchronous Event Request (0Ch): Supported 00:27:22.804 Keep Alive (18h): Supported 00:27:22.804 I/O Commands 00:27:22.804 ------------ 00:27:22.805 Flush (00h): Supported 00:27:22.805 Write (01h): Supported LBA-Change 00:27:22.805 Read (02h): Supported 00:27:22.805 Write Zeroes (08h): Supported LBA-Change 00:27:22.805 Dataset Management (09h): Supported 00:27:22.805 00:27:22.805 Error Log 00:27:22.805 ========= 00:27:22.805 Entry: 0 00:27:22.805 Error Count: 0x3 00:27:22.805 Submission Queue Id: 0x0 00:27:22.805 Command Id: 0x5 00:27:22.805 Phase Bit: 0 00:27:22.805 Status Code: 0x2 00:27:22.805 Status Code Type: 0x0 00:27:22.805 Do Not Retry: 1 00:27:22.805 
Error Location: 0x28 00:27:22.805 LBA: 0x0 00:27:22.805 Namespace: 0x0 00:27:22.805 Vendor Log Page: 0x0 00:27:22.805 ----------- 00:27:22.805 Entry: 1 00:27:22.805 Error Count: 0x2 00:27:22.805 Submission Queue Id: 0x0 00:27:22.805 Command Id: 0x5 00:27:22.805 Phase Bit: 0 00:27:22.805 Status Code: 0x2 00:27:22.805 Status Code Type: 0x0 00:27:22.805 Do Not Retry: 1 00:27:22.805 Error Location: 0x28 00:27:22.805 LBA: 0x0 00:27:22.805 Namespace: 0x0 00:27:22.805 Vendor Log Page: 0x0 00:27:22.805 ----------- 00:27:22.805 Entry: 2 00:27:22.805 Error Count: 0x1 00:27:22.805 Submission Queue Id: 0x0 00:27:22.805 Command Id: 0x4 00:27:22.805 Phase Bit: 0 00:27:22.805 Status Code: 0x2 00:27:22.805 Status Code Type: 0x0 00:27:22.805 Do Not Retry: 1 00:27:22.805 Error Location: 0x28 00:27:22.805 LBA: 0x0 00:27:22.805 Namespace: 0x0 00:27:22.805 Vendor Log Page: 0x0 00:27:22.805 00:27:22.805 Number of Queues 00:27:22.805 ================ 00:27:22.805 Number of I/O Submission Queues: 128 00:27:22.805 Number of I/O Completion Queues: 128 00:27:22.805 00:27:22.805 ZNS Specific Controller Data 00:27:22.805 ============================ 00:27:22.805 Zone Append Size Limit: 0 00:27:22.805 00:27:22.805 00:27:22.805 Active Namespaces 00:27:22.805 ================= 00:27:22.805 get_feature(0x05) failed 00:27:22.805 Namespace ID:1 00:27:22.805 Command Set Identifier: NVM (00h) 00:27:22.805 Deallocate: Supported 00:27:22.805 Deallocated/Unwritten Error: Not Supported 00:27:22.805 Deallocated Read Value: Unknown 00:27:22.805 Deallocate in Write Zeroes: Not Supported 00:27:22.805 Deallocated Guard Field: 0xFFFF 00:27:22.805 Flush: Supported 00:27:22.805 Reservation: Not Supported 00:27:22.805 Namespace Sharing Capabilities: Multiple Controllers 00:27:22.805 Size (in LBAs): 3750748848 (1788GiB) 00:27:22.805 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:22.805 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:22.805 UUID: e7a5a770-e32c-45f6-bc2a-fa037a29ecca 00:27:22.805 Thin Provisioning: Not Supported 00:27:22.805 Per-NS Atomic Units: Yes 00:27:22.805 Atomic Write Unit (Normal): 8 00:27:22.805 Atomic Write Unit (PFail): 8 00:27:22.805 Preferred Write Granularity: 8 00:27:22.805 Atomic Compare & Write Unit: 8 00:27:22.805 Atomic Boundary Size (Normal): 0 00:27:22.805 Atomic Boundary Size (PFail): 0 00:27:22.805 Atomic Boundary Offset: 0 00:27:22.805 NGUID/EUI64 Never Reused: No 00:27:22.805 ANA group ID: 1 00:27:22.805 Namespace Write Protected: No 00:27:22.805 Number of LBA Formats: 1 00:27:22.805 Current LBA Format: LBA Format #00 00:27:22.805 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:22.805 00:27:22.805 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:22.805 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:22.805 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:27:22.805 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:22.805 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:27:22.805 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:22.805 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:22.805 rmmod nvme_tcp 00:27:22.805 rmmod nvme_fabrics 00:27:22.805 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:22.805 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:27:22.805 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:27:22.805 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:27:22.805 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:22.805 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:22.805 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:22.805 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:27:22.805 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:27:22.805 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:22.805 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:27:22.805 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:22.805 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:22.805 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.805 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:22.805 20:05:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.353 20:05:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:25.353 20:05:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:25.353 20:05:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:25.353 20:05:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:27:25.353 20:05:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:25.353 20:05:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:25.353 20:05:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:25.353 20:05:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:25.353 20:05:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:25.353 20:05:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:25.353 20:05:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:28.658 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:28.658 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:28.658 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:27:28.658 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:28.658 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:28.658 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:28.658 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:28.658 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:28.658 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:28.658 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:28.658 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:28.658 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:28.658 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:28.658 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:28.658 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:28.658 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:28.658 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:29.231 00:27:29.231 real 0m19.723s 00:27:29.231 user 0m5.296s 00:27:29.231 sys 0m11.409s 00:27:29.231 20:05:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:29.231 20:05:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:29.231 ************************************ 00:27:29.231 END TEST nvmf_identify_kernel_target 00:27:29.231 ************************************ 00:27:29.231 20:05:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:29.231 20:05:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:29.231 20:05:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:29.231 20:05:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.231 ************************************ 00:27:29.231 START TEST nvmf_auth_host 00:27:29.231 ************************************ 00:27:29.231 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:29.231 * Looking for test storage... 
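The clean_kernel_target teardown traced a few records above reduces to the configfs sequence below. This is a hedged sketch, not the suite's helper: the redirect target of the bare `echo 0` is not visible in the trace, so the conventional nvmet `namespaces/1/enable` knob is assumed, and the existence guard mirrors the `[[ -e ... ]]` check shown in the xtrace.

# Sketch of the kernel nvmet teardown traced above; paths follow this
# run's layout (subsystem nqn.2016-06.io.spdk:testnqn, namespace 1, port 1).
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
if [[ -e $subsys ]]; then
    echo 0 > "$subsys/namespaces/1/enable"         # quiesce the namespace (assumed redirect target)
    rm -f "$port/subsystems/${subsys##*/}"         # detach subsystem from the port
    rmdir "$subsys/namespaces/1" "$port" "$subsys" # remove namespace, port, then subsystem
    modprobe -r nvmet_tcp nvmet                    # unload the kernel target modules
fi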
00:27:29.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:29.231 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:29.231 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:27:29.231 20:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:29.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.494 --rc genhtml_branch_coverage=1 00:27:29.494 --rc genhtml_function_coverage=1 00:27:29.494 --rc genhtml_legend=1 00:27:29.494 --rc geninfo_all_blocks=1 00:27:29.494 --rc geninfo_unexecuted_blocks=1 00:27:29.494 00:27:29.494 ' 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:29.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.494 --rc genhtml_branch_coverage=1 00:27:29.494 --rc genhtml_function_coverage=1 00:27:29.494 --rc genhtml_legend=1 00:27:29.494 --rc geninfo_all_blocks=1 00:27:29.494 --rc geninfo_unexecuted_blocks=1 00:27:29.494 00:27:29.494 ' 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:29.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.494 --rc genhtml_branch_coverage=1 00:27:29.494 --rc genhtml_function_coverage=1 00:27:29.494 --rc genhtml_legend=1 00:27:29.494 --rc geninfo_all_blocks=1 00:27:29.494 --rc geninfo_unexecuted_blocks=1 00:27:29.494 00:27:29.494 ' 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:29.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.494 --rc genhtml_branch_coverage=1 00:27:29.494 --rc genhtml_function_coverage=1 00:27:29.494 --rc genhtml_legend=1 00:27:29.494 --rc geninfo_all_blocks=1 00:27:29.494 --rc geninfo_unexecuted_blocks=1 00:27:29.494 00:27:29.494 ' 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:29.494 20:05:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.494 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:29.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:29.495 20:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.637 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:37.637 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:37.637 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:37.637 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:37.637 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:37.637 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:37.637 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:37.637 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:37.637 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:37.637 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:37.637 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:37.637 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:37.637 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:37.637 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:37.637 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:37.637 20:05:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:37.637 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:37.637 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:37.637 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:37.637 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:37.637 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:37.637 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:37.637 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:37.638 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:37.638 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.638 
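The NIC probe running through this stretch — matching each PCI function against the e810 allow-list, then walking sysfs for bound net devices — condenses to the loop below. A condensed reading of the pci_net_devs handling in nvmf/common.sh, under the PCI addresses this run found, not the helper itself.

# Sketch of the netdev discovery above: for each matched PCI function,
# list the interfaces the kernel exposes under sysfs.
for pci in 0000:4b:00.0 0000:4b:00.1; do
    devs=("/sys/bus/pci/devices/$pci/net/"*)
    [[ -e ${devs[0]} ]] || continue                # no netdev bound to this function
    echo "Found net devices under $pci: ${devs[*]##*/}"
done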
20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:37.638 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:37.638 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:37.638 20:05:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:37.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:37.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:27:37.638 00:27:37.638 --- 10.0.0.2 ping statistics --- 00:27:37.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.638 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:37.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:37.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:27:37.638 00:27:37.638 --- 10.0.0.1 ping statistics --- 00:27:37.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.638 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3811861 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3811861 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3811861 ']' 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
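Gathered from the trace above, the TCP test topology is: the target-side port moves into a private network namespace so initiator and target stacks stay isolated on one host, each side gets a /24 address, the NVMe/TCP port is opened, and a ping in each direction validates the link. The sequence below is the same plumbing collected in one place.

# The namespace setup traced above, as a single sequence.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move target port into its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator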
00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:37.638 20:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e4d2e50593c19082fc07a8edd6bf7dfb 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.uCE 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e4d2e50593c19082fc07a8edd6bf7dfb 0 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e4d2e50593c19082fc07a8edd6bf7dfb 0 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e4d2e50593c19082fc07a8edd6bf7dfb 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.uCE 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.uCE 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.uCE 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:37.901 20:05:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ee940199286720a48b0e42d9b05714d499d508d0e4776f92bb81de01b0a5be50 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.4xe 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ee940199286720a48b0e42d9b05714d499d508d0e4776f92bb81de01b0a5be50 3 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ee940199286720a48b0e42d9b05714d499d508d0e4776f92bb81de01b0a5be50 3 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ee940199286720a48b0e42d9b05714d499d508d0e4776f92bb81de01b0a5be50 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.4xe 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.4xe 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.4xe 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=da11841df5a334945a9b16c30464d9669201ddbafc9e1b5b 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.xJS 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key da11841df5a334945a9b16c30464d9669201ddbafc9e1b5b 0 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 da11841df5a334945a9b16c30464d9669201ddbafc9e1b5b 0 
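The gen_dhchap_key pattern repeating through this stretch condenses to: draw len/2 random bytes as hex with xxd, pack them as a DH-HMAC-CHAP secret, and stash the result in a 0600 temp file. The inline python packing step is not shown in the trace; the CRC-32 framing below follows the usual "DHHC-1:<hash id>:<base64(key || crc32)>:" secret convention and is an assumption, not a transcript of the suite's helper.

# Hedged stand-in for one gen_dhchap_key invocation (null digest, 32 hex chars).
digest=null hash_id=0 len=32
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # len/2 random bytes, hex-encoded
file=$(mktemp -t "spdk.key-$digest.XXX")
python3 - "$key" "$hash_id" > "$file" <<'EOF'
import base64, sys, zlib
raw = bytes.fromhex(sys.argv[1])
# Assumed framing: append little-endian CRC-32 of the key, then base64.
blob = raw + zlib.crc32(raw).to_bytes(4, "little")
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(blob).decode()}:")
EOF
chmod 0600 "$file"                                 # keys must not be world-readable
echo "$file"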
00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=da11841df5a334945a9b16c30464d9669201ddbafc9e1b5b 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:37.901 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:38.162 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.xJS 00:27:38.162 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.xJS 00:27:38.162 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.xJS 00:27:38.162 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:38.162 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:38.162 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:38.162 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:38.162 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:38.162 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:38.162 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:38.162 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e3616d9e8370bc7041e6f11ffe1bf9060e01a7a115e2a260 00:27:38.162 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:38.162 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.1ig 00:27:38.162 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e3616d9e8370bc7041e6f11ffe1bf9060e01a7a115e2a260 2 00:27:38.162 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e3616d9e8370bc7041e6f11ffe1bf9060e01a7a115e2a260 2 00:27:38.162 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:38.162 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:38.162 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e3616d9e8370bc7041e6f11ffe1bf9060e01a7a115e2a260 00:27:38.162 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:38.162 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:38.162 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.1ig 00:27:38.162 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.1ig 00:27:38.162 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.1ig 00:27:38.162 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:38.162 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:38.162 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:38.162 20:05:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=538838a6f3c7d3043cec1a381d0eb6d9 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.92L 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 538838a6f3c7d3043cec1a381d0eb6d9 1 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 538838a6f3c7d3043cec1a381d0eb6d9 1 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=538838a6f3c7d3043cec1a381d0eb6d9 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.92L 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.92L 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.92L 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=26d6aca341430060504bfd7865b27976 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.WHm 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 26d6aca341430060504bfd7865b27976 1 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 26d6aca341430060504bfd7865b27976 1 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=26d6aca341430060504bfd7865b27976 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.WHm 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.WHm 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.WHm 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9c7b8ee4466f344096a6a13b6e31924dd138496633d191ca 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Gk8 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9c7b8ee4466f344096a6a13b6e31924dd138496633d191ca 2 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9c7b8ee4466f344096a6a13b6e31924dd138496633d191ca 2 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9c7b8ee4466f344096a6a13b6e31924dd138496633d191ca 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:38.163 20:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Gk8 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Gk8 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Gk8 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:38.424 20:05:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=385296f2108b7cab80115d6ecf3c46e7 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.XVd 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 385296f2108b7cab80115d6ecf3c46e7 0 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 385296f2108b7cab80115d6ecf3c46e7 0 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=385296f2108b7cab80115d6ecf3c46e7 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.XVd 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.XVd 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.XVd 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f80d1186bcc885a7a1afa1729d609d75aca044e8ebf05eac2d66c4617d9c7dec 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.evB 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f80d1186bcc885a7a1afa1729d609d75aca044e8ebf05eac2d66c4617d9c7dec 3 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f80d1186bcc885a7a1afa1729d609d75aca044e8ebf05eac2d66c4617d9c7dec 3 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f80d1186bcc885a7a1afa1729d609d75aca044e8ebf05eac2d66c4617d9c7dec 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.evB 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.evB 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.evB 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3811861 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3811861 ']' 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:38.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:38.424 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.uCE 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.4xe ]] 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4xe 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.xJS 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.1ig ]] 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.1ig 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.92L 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.WHm ]] 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.WHm 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Gk8 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.XVd ]] 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.XVd 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.evB 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.685 20:05:39 
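Before registration, each raw secret is wrapped into its DHHC-1 representation by the "python -" heredoc in the trace: judging from the DHHC-1:0X:...: strings that appear later in this section, the printable secret gets a 4-byte CRC-32 appended and the result is base64-encoded between a DHHC-1 prefix and a two-digit digest index (00=null, 01=sha256, 02=sha384, 03=sha512, matching the digests map above). A sketch of that formatting plus the keyring registration; the CRC byte order is my reading of the helper, not something the trace shows:

```bash
# Format a secret as DHHC-1:<digest>:<base64(secret || crc32)>: (assumed layout,
# reconstructed from the key strings visible later in this trace).
key=da11841df5a334945a9b16c30464d9669201ddbafc9e1b5b
digest=0   # 0=null, 1=sha256, 2=sha384, 3=sha512
python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # assumption: little-endian CRC-32 tail
print("DHHC-1:{:02d}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PY

# The formatted key files are then handed to the SPDK target as named keyring
# entries, exactly as the rpc_cmd keyring_file_add_key lines above do:
scripts/rpc.py keyring_file_add_key key1 /tmp/spdk.key-null.xJS
scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1ig
```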
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:38.685 20:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:42.892 Waiting for block devices as requested 00:27:42.892 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:42.892 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:42.892 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:42.892 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:42.892 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:42.892 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:42.892 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:42.892 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:42.892 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:43.152 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:43.152 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:43.152 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:43.152 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:43.413 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:43.413 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:43.413 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:43.413 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:44.459 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:44.459 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:44.459 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:44.459 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:44.459 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:44.459 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:44.459 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:44.459 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:44.459 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:44.459 No valid GPT data, bailing 00:27:44.459 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:44.459 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:44.459 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:44.459 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:44.459 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:44.459 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:44.459 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:44.459 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:44.459 20:05:45 
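At this point the trace has reset the PCI devices, handed 0000:65:00.0 back to the kernel nvme driver, confirmed /dev/nvme0n1 carries no GPT, and created the configfs skeleton for a kernel NVMe-oF target; the echo and ln -s lines that follow populate that skeleton. The trace does not show the redirection targets (and the SPDK-nqn... string written first is presumably a model/serial attribute not visible here), so the attribute paths below are assumptions based on the standard kernel nvmet configfs layout:

```bash
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1

modprobe nvmet                                   # mounts the configfs tree used below
mkdir "$subsys" "$subsys/namespaces/1" "$port"   # the three mkdirs in the trace
echo 1            > "$subsys/attr_allow_any_host"       # opened up now, restricted later
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"  # back the namespace with the drive
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"              # expose the subsystem on the port
```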
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:44.459 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:44.459 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:44.459 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:27:44.459 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:44.459 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:27:44.459 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:27:44.459 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:27:44.459 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:44.459 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:44.720 00:27:44.720 Discovery Log Number of Records 2, Generation counter 2 00:27:44.720 =====Discovery Log Entry 0====== 00:27:44.720 trtype: tcp 00:27:44.720 adrfam: ipv4 00:27:44.720 subtype: current discovery subsystem 00:27:44.720 treq: not specified, sq flow control disable supported 00:27:44.720 portid: 1 00:27:44.720 trsvcid: 4420 00:27:44.720 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:44.720 traddr: 10.0.0.1 00:27:44.720 eflags: none 00:27:44.720 sectype: none 00:27:44.720 =====Discovery Log Entry 1====== 00:27:44.720 trtype: tcp 00:27:44.720 adrfam: ipv4 00:27:44.720 subtype: nvme subsystem 00:27:44.720 treq: not specified, sq flow control disable supported 00:27:44.720 portid: 1 00:27:44.720 trsvcid: 4420 00:27:44.720 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:44.720 traddr: 10.0.0.1 00:27:44.720 eflags: none 00:27:44.720 sectype: none 00:27:44.720 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:44.720 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:44.720 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:44.720 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:44.720 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.720 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:44.720 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:44.720 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:44.720 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:27:44.720 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:27:44.720 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:44.720 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
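The nvme discover above returns two records, the discovery subsystem plus nqn.2024-02.io.spdk:cnode0, confirming the kernel target is listening on 10.0.0.1:4420. The trace then creates the host entry, switches the subsystem from allow-any-host to an explicit allow-list, and (in the nvmet_auth_set_key echos that continue below) installs the DH-HMAC-CHAP material for the first sha256/ffdhe2048 round. The echo targets are again not traced; dhchap_hash, dhchap_dhgroup, dhchap_key and dhchap_ctrl_key are the conventional nvmet host attributes and are assumed here:

```bash
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

mkdir "$host"
echo 0 > "$subsys/attr_allow_any_host"     # from now on only allowed_hosts may connect
ln -s "$host" "$subsys/allowed_hosts/"

# nvmet_auth_set_key: pick hash and dhgroup, install both directions' secrets
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048      > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:ZGExMTg0...CrRonQ==:' > "$host/dhchap_key"       # host secret (abridged)
echo 'DHHC-1:02:ZTM2MTZk...3neqtw==:' > "$host/dhchap_ctrl_key"  # ctrl secret (abridged)
```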
-- host/auth.sh@49 -- # echo ffdhe2048 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: ]] 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.721 nvme0n1 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.721 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: ]] 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
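On the host side, connect_authenticate first widens bdev_nvme_set_options to every digest and FFDHE group, then attaches to the kernel target with the key pair whose index matches the target-side provisioning; the nvme0n1 namespace appearing right after is that attach succeeding. Rendered as the equivalent rpc.py invocations (rpc_cmd in the trace is a thin wrapper around this script, so the flags below are taken verbatim from the trace):

```bash
# First round: accept any digest/dhgroup combination
scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

# Authenticate as host0 against cnode0 using keyring entries key1/ckey1
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
```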
00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.982 nvme0n1 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.982 20:05:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: ]] 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:27:44.982 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:44.983 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.983 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:44.983 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:44.983 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:44.983 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.983 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:44.983 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.983 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.983 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.244 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.244 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.244 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.244 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.244 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.244 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.244 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.244 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.244 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.244 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.244 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.244 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:45.244 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.244 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.244 nvme0n1 00:27:45.244 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.244 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.244 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.244 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.244 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.244 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.244 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.244 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.244 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.244 20:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: ]] 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.244 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.506 nvme0n1 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: ]] 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.506 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.767 nvme0n1 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.767 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.030 nvme0n1 00:27:46.030 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.030 20:05:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.030 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.030 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.030 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.030 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.030 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.030 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.030 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.030 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.030 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.030 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:46.030 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.030 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:46.030 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.030 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:46.030 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:46.030 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:46.030 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:27:46.030 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:27:46.030 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.030 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:46.030 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:27:46.030 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: ]] 00:27:46.030 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:27:46.031 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:46.031 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.031 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:46.031 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:46.031 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:46.031 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.031 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:46.031 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.031 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.031 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.031 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.031 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.031 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.031 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.031 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.031 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.031 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.031 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.031 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.031 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.031 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.031 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:46.031 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.031 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.293 nvme0n1 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: ]] 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.293 
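The nvmf/common.sh@769-783 run repeated before every attach is get_main_ns_ip resolving the address to dial. Reconstructed from the xtrace, the helper maps the transport to the name of the variable holding the address and then dereferences it; this is a sketch of that logic, not the verbatim function, and the $TEST_TRANSPORT name is an assumption (the trace only shows the resolved value "tcp"):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # [[ -z tcp ]] / [[ -z NVMF_INITIATOR_IP ]] in the trace: both the
        # transport and its candidate variable name must be non-empty.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1   # [[ -z 10.0.0.1 ]] once dereferenced
        echo "${!ip}"                 # 10.0.0.1 on this rig
    }
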
20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.293 20:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.555 nvme0n1 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: ]] 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.555 20:05:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.555 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.816 nvme0n1 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: ]] 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.816 20:05:47 
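The bare echo lines at host/auth.sh@48-51 are nvmet_auth_set_key writing the digest, DH group, and key into the kernel target's configfs entry for the host NQN; xtrace does not print redirections, so only the echoed values appear in the log. A sketch under the standard Linux nvmet configfs layout, where the $hostdir path and attribute names are assumptions rather than anything shown in this trace:

    # Hypothetical path; the log records only the values being echoed.
    hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac(sha256)" > "$hostdir/dhchap_hash"      # host/auth.sh@48
    echo "$dhgroup"     > "$hostdir/dhchap_dhgroup"   # host/auth.sh@49
    echo "$key"         > "$hostdir/dhchap_key"       # host/auth.sh@50
    # host/auth.sh@51: the controller key is optional (empty for keyid 4).
    [[ -z $ckey ]] || echo "$ckey" > "$hostdir/dhchap_ctrl_key"
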
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.816 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.817 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.817 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.817 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.817 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:46.817 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.817 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.077 nvme0n1 00:27:47.077 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.077 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.077 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.077 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.077 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.077 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.077 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.077 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.077 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.077 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:47.078 20:05:47 
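The ckey=(${ckeys[keyid]:+...}) expansion that precedes each attach is what makes bidirectional authentication optional: for keyid 4 the controller key is empty (hence the [[ -z '' ]] checks nearby), the array expands to nothing, and the attach carries only --dhchap-key; for keyids 0-3 it injects the extra flag. Roughly, per the host/auth.sh@58-61 lines:

    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"   # empty array adds no words
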
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.078 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.339 nvme0n1 00:27:47.339 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.339 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.339 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.339 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.339 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.339 20:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: ]] 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.339 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.601 nvme0n1 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:27:47.601 20:05:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: ]] 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.601 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.602 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.602 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.602 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.602 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.602 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.602 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.602 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.602 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.602 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.602 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:47.602 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.602 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.863 nvme0n1 00:27:47.863 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:27:47.863 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.863 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.863 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.863 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.863 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.123 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.123 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.123 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.123 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.123 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.123 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.123 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:48.123 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.123 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.123 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:48.123 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:48.123 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:27:48.123 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: ]] 00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
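Each connect_authenticate pass in this excerpt reduces to four RPCs against the SPDK host: pin the initiator to the digest and DH group under test, attach with the selected key pair, check that the controller actually materialized, and detach before the next combination. Taken directly from the host/auth.sh@60-65 lines around this point (ffdhe4096, keyid 2 shown):

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
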
00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.124 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.385 nvme0n1 00:27:48.385 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.385 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.385 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.385 20:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: ]] 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.385 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:48.386 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.386 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.646 nvme0n1 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.646 20:05:49 
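The secrets cycled through this section use the NVMe-oF DH-HMAC-CHAP representation DHHC-1:TT:<base64>:, where TT is the secret transform (00 = cleartext, 01/02/03 = SHA-256/384/512-transformed; this run exercises all four) and the base64 payload is the secret followed by a 4-byte CRC-32 checksum. Decoding one of the :00: keys from above makes the layout visible; here the payload is a 48-character ASCII secret plus the four checksum bytes:

    echo 'ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==' \
        | base64 -d | xxd
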
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.646 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.905 nvme0n1 00:27:48.905 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.905 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.905 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.905 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.905 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.905 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: ]] 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.166 20:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.428 nvme0n1 00:27:49.428 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.428 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.428 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.428 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.428 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.428 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.428 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.428 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.428 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.428 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.428 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.428 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.428 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:49.428 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.428 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.428 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:49.429 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:49.429 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:27:49.429 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:27:49.429 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.691 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:49.691 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:27:49.691 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: ]] 00:27:49.691 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 
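[editor's note] The two halves of each iteration are visible in the xtrace above: host/auth.sh@42-51 (nvmet_auth_set_key) provisions the secret on the kernel nvmet target, then host/auth.sh@55-65 (connect_authenticate) dials back in from the SPDK host with the matching --dhchap-* options. The trace only shows the echo arguments at auth.sh@48-@51, not where they are redirected, so the following is a minimal sketch of the target-side helper, assuming the usual kernel nvmet configfs layout under /sys/kernel/config/nvmet/hosts; the host directory name and the global keys/ckeys arrays are assumptions inferred from the trace, not something this log confirms.

    # Sketch reconstructed from the auth.sh@42-51 xtrace entries above.
    # Assumes keys[]/ckeys[] hold the DHHC-1 secrets echoed in the trace and
    # that the host entry below already exists in configfs.
    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey
        digest="$1" dhgroup="$2" keyid="$3"
        key="${keys[keyid]}" ckey="${ckeys[keyid]}"

        # Assumed configfs path; the trace does not show where the echoes land.
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$host/dhchap_hash"     # e.g. hmac(sha256)
        echo "$dhgroup"      > "$host/dhchap_dhgroup"  # e.g. ffdhe6144
        echo "$key"          > "$host/dhchap_key"      # DHHC-1:xx:<base64>: secret
        # The controller key is optional; keyid 4 in this run has no ckey.
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }

In the DHHC-1:xx: prefix of each secret, the two-digit field records how the secret was transformed (00 cleartext, 01/02/03 for SHA-256/384/512), which is why key0 above starts with DHHC-1:00: while its controller key starts with DHHC-1:03:.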
00:27:49.691 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:49.691 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.691 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.691 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:49.691 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:49.691 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.691 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:49.691 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.691 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.691 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.691 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.691 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.691 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.691 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.691 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.691 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.691 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.691 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.691 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.691 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.691 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.691 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:49.691 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.691 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.954 nvme0n1 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.954 20:05:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: ]] 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.954 20:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.526 nvme0n1 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: ]] 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.526 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.099 nvme0n1 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.099 20:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.360 nvme0n1 00:27:51.360 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.360 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.360 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.360 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.360 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: ]] 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.621 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.622 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:51.622 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.622 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:52.193 nvme0n1 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: ]] 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.193 20:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.135 nvme0n1 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:53.135 
20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: ]] 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.135 20:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.707 nvme0n1 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: ]] 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.707 
20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.707 20:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.278 nvme0n1 00:27:54.278 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.278 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.278 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.278 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.278 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.278 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.539 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.540 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.540 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.540 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.540 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.540 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.540 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.540 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.540 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.540 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.540 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:54.540 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.540 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.111 nvme0n1 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: ]] 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.111 20:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.373 nvme0n1 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: ]] 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.373 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:55.374 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.374 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.634 nvme0n1 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:27:55.634 20:05:56 
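The passes above cycle sha384/ffdhe2048 through the key indices; the one just completed attached with key1/ckey1. A minimal Bash sketch of one such pass, reconstructed only from the RPC names and flags visible in this trace; rpc_cmd is the suite's JSON-RPC wrapper seen in the trace itself, and the keyring names key1/ckey1 are assumed to have been registered earlier in the run:

  # One connect_authenticate pass as the trace suggests it (sketch, not the script itself).
  digest=sha384; dhgroup=ffdhe2048; keyid=1
  # Restrict the host to a single digest/DH group so the handshake must use them:
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # Attach with the key pair under test; a successful attach means DH-HMAC-CHAP completed:
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  # Verify the controller actually came up, then tear it down for the next pass:
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0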
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: ]] 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.634 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.895 nvme0n1 00:27:55.895 20:05:56 
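Every pass re-runs the get_main_ns_ip block (nvmf/common.sh@769-783) to decide which address the initiator dials; for tcp it dereferences NVMF_INITIATOR_IP and prints 10.0.0.1. A sketch of that helper as the trace implies it; the variable name TEST_TRANSPORT is an assumption, since the trace only ever shows its expanded value, tcp:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP    # name of the variable to read
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT ]] && return 1                    # trace: [[ -z tcp ]]
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # trace: [[ -z NVMF_INITIATOR_IP ]]
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1                             # trace: [[ -z 10.0.0.1 ]]
      echo "${!ip}"                                           # trace: echo 10.0.0.1
  }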
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: ]] 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.895 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.156 nvme0n1 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.156 nvme0n1 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.156 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.417 20:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: ]] 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.417 nvme0n1 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.417 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.678 
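The outer loop at host/auth.sh@101 has just advanced from ffdhe2048 to ffdhe3072; each nvme0n1 in the trace is the namespace that appears after a successful attach. The loop structure implied by the @101-@104 lines, with the five DHHC-1 secrets abbreviated here because the full strings are already in the trace (keyid 4 has an empty controller key, which is why the ${ckeys[keyid]:+...} expansion at auth.sh@58 drops --dhchap-ctrlr-key for it):

  keys=('DHHC-1:00:ZTRk...' 'DHHC-1:00:ZGEx...' 'DHHC-1:01:NTM4...' 'DHHC-1:02:OWM3...' 'DHHC-1:03:Zjgw...')
  ckeys=('DHHC-1:03:ZWU5...' 'DHHC-1:02:ZTM2...' 'DHHC-1:01:MjZk...' 'DHHC-1:00:Mzg1...' '')
  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do      # the groups exercised in this stretch
      for keyid in "${!keys[@]}"; do                    # key indices 0..4
          nvmet_auth_set_key sha384 "$dhgroup" "$keyid"    # target side (auth.sh@103)
          connect_authenticate sha384 "$dhgroup" "$keyid"  # host side (auth.sh@104)
      done
  done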
20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: ]] 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.678 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.678 20:05:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.679 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.679 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.679 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.679 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:56.679 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.679 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.679 nvme0n1 00:27:56.679 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: ]] 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.940 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.202 nvme0n1 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: ]] 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.202 20:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.464 nvme0n1 00:27:57.464 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.464 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.464 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.464 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.464 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.464 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.464 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.464 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.464 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.464 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.464 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.464 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.464 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:57.464 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.464 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:57.464 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:57.464 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:57.465 
20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.465 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.727 nvme0n1 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.727 
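nvmet_auth_set_key itself only echoes the digest as 'hmac(sha384)', the DH group, and the DHHC-1 strings (auth.sh@48-51); the trace never shows where those echoes are redirected. They are consistent with writing the Linux nvmet configfs host attributes, so the following is a sketch under that assumption, using the ffdhe3072/keyid-4 values just logged; the configfs paths are assumed, not taken from the trace, and the key is abbreviated:

  hostnqn=nqn.2024-02.io.spdk:host0
  host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn          # assumed nvmet configfs layout
  echo 'hmac(sha384)' > "$host_cfg/dhchap_hash"             # digest echoed at auth.sh@48
  echo ffdhe3072 > "$host_cfg/dhchap_dhgroup"               # DH group echoed at auth.sh@49
  echo 'DHHC-1:03:Zjgw...RI0NI=:' > "$host_cfg/dhchap_key"  # host secret, auth.sh@50
  # auth.sh@51 writes dhchap_ctrl_key only when a controller key exists; for keyid 4
  # ckey is empty ([[ -z '' ]]), so the attach omits --dhchap-ctrlr-key and the
  # controller is not asked to authenticate back (one-way authentication).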
20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: ]] 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.727 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.728 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.728 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.728 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.728 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.728 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.728 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.728 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:57.728 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.728 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.989 nvme0n1 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: ]] 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:57.989 20:05:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.989 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.251 nvme0n1 00:27:58.251 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.251 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.251 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.251 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.251 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.251 20:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: ]] 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.251 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.512 nvme0n1 00:27:58.512 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.512 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.512 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.512 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.512 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.512 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: ]] 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.772 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.032 nvme0n1 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.032 20:05:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.032 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.033 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.033 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.033 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:59.033 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.033 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.293 nvme0n1 00:27:59.293 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.293 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.293 20:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.293 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.293 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.293 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.293 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.293 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.293 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.293 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.293 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.293 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:59.293 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.293 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:59.293 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.293 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.293 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:59.293 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:59.293 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:27:59.293 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:27:59.293 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.293 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:59.293 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:27:59.293 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: ]] 00:27:59.294 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:27:59.294 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:59.294 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.294 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.294 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:59.294 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:59.294 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.294 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:59.294 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.294 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.294 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.294 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.294 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.294 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.294 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.294 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.294 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.294 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.294 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.294 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.294 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.294 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.294 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:59.294 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.294 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.866 nvme0n1 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: ]] 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.866 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.436 nvme0n1 00:28:00.436 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.436 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.436 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.437 20:06:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.437 20:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: ]] 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.437 20:06:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.437 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.697 nvme0n1 00:28:00.697 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.697 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.697 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.697 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.697 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.697 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.958 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.958 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.958 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.958 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.958 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.958 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.958 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:00.958 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.958 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.958 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:00.958 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:00.958 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:28:00.958 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:28:00.958 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.959 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:00.959 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:28:00.959 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: ]] 00:28:00.959 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:28:00.959 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:00.959 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.959 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.959 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:00.959 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:00.959 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.959 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:00.959 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.959 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.959 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.959 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.959 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.959 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.959 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.959 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.959 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.959 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.959 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.959 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.959 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.959 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.959 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:00.959 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.959 
20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.219 nvme0n1 00:28:01.219 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.219 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.219 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.219 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.219 20:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.219 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.480 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.741 nvme0n1 00:28:01.741 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.741 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.741 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.741 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.741 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.741 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.741 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.741 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.741 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.741 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.741 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.741 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:01.741 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.741 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:01.741 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.741 20:06:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.741 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:01.741 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: ]] 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.002 20:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.575 nvme0n1 00:28:02.575 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.575 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.575 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.575 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.575 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.575 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.575 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.575 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.575 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.575 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.575 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.575 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.575 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:02.575 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.575 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.575 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:02.575 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:02.575 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:28:02.575 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:28:02.575 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.575 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:02.575 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:28:02.575 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: ]] 00:28:02.575 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:28:02.575 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:02.575 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.576 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.576 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:02.576 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:02.576 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.576 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:02.576 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.576 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.576 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.576 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.576 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:02.576 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:02.576 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:02.576 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.576 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.576 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:02.576 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.576 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:02.576 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:02.576 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:02.576 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:02.576 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.576 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.148 nvme0n1 00:28:03.148 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.148 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.148 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.148 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.148 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.148 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.408 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.408 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.408 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:03.408 20:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: ]] 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.408 
20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.408 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.980 nvme0n1 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: ]] 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.980 20:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.927 nvme0n1 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.927 20:06:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:04.927 20:06:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.927 20:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.499 nvme0n1 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: ]] 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.499 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:05.761 nvme0n1 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: ]] 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.761 nvme0n1 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.761 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.023 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.023 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.023 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.023 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.023 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.023 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.023 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.023 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:06.023 
20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.023 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:06.023 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.023 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:06.023 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:28:06.023 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:28:06.023 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:06.023 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.023 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:28:06.023 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: ]] 00:28:06.023 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:28:06.023 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:06.023 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.024 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:06.024 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:06.024 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:06.024 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.024 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:06.024 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.024 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.024 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.024 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.024 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:06.024 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:06.024 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:06.024 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.024 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.024 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:06.024 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.024 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:06.024 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:06.024 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:06.024 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:06.024 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.024 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.024 nvme0n1 00:28:06.024 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.024 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.024 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.024 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.024 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.024 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.285 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.285 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.285 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.285 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.285 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.285 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.285 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:06.285 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.285 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:06.285 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.285 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: ]] 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.286 
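
Host-side, each connect_authenticate invocation in this trace reduces to two SPDK RPCs: first constrain the host's DH-HMAC-CHAP digests and DH groups, then attach the controller with the key pair for this keyid. Condensed for the sha512/ffdhe2048/keyid=3 round entered above; invoking rpc.py directly is an assumption, as the log's rpc_cmd wrapper hides the exact client call:

  # One (digest, dhgroup, keyid) round, as issued via SPDK's RPC client.
  # key3/ckey3 are keyring names the test registered earlier.
  rpc=scripts/rpc.py
  $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
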
20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.286 20:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.286 nvme0n1 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:06.286 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.548 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:06.548 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.548 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.548 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.548 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.548 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:06.548 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:06.548 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:06.548 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.548 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.548 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:06.548 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.548 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:06.548 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:06.548 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:06.548 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:06.548 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.548 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.548 nvme0n1 00:28:06.548 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.548 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.548 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.548 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: ]] 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.549 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.810 nvme0n1 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.810 
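
Every attach is then verified and torn down the same way before the next combination, as the entries continuing below show: list the controllers over RPC, extract the names with jq, require exactly nvme0 (the [[ nvme0 == \n\v\m\e\0 ]] form is just bash xtrace escaping the right-hand literal), and detach. A sketch of that step, again assuming rpc.py as the client:

  # Confirm authentication actually produced the expected controller,
  # then detach so the next digest/dhgroup/keyid round starts clean.
  rpc=scripts/rpc.py
  name=$($rpc bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]    # a mismatch here fails this test step
  $rpc bdev_nvme_detach_controller nvme0
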
20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: ]] 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:06.810 20:06:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.810 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:06.811 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:06.811 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:06.811 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:06.811 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.811 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.071 nvme0n1 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:28:07.072 20:06:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: ]] 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.072 20:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.333 nvme0n1 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: ]] 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.333 20:06:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.333 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.594 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.594 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:07.594 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:07.594 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:07.594 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.594 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.595 nvme0n1 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:07.595 
20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.595 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
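The trace above completes one full pass over the sha512/ffdhe3072 key set: for each keyid the target-side key is rotated with nvmet_auth_set_key, the host is pinned to the matching digest/dhgroup pair, a controller is attached with DH-HMAC-CHAP, its presence is verified, and it is detached again; the same cycle repeats below for ffdhe4096, ffdhe6144 and ffdhe8192. A minimal sketch of the host-side helper, reconstructed from the host/auth.sh trace lines (rpc_cmd is the autotest wrapper around SPDK's rpc.py; the key0..key4 and ckey0..ckey3 names are assumed to have been registered with the keyring earlier in the test):

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # keyid 4 carries no controller key, so the ckey array expands to
        # nothing and that iteration exercises unidirectional auth only
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # pin the host to a single digest/dhgroup so the handshake must use it
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"

        # a successful handshake leaves exactly one controller named nvme0
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The target side (nvmet_auth_set_key, host/auth.sh@42-51) mirrors this by echoing 'hmac(sha512)', the dhgroup and the DHHC-1 secrets into the kernel nvmet host entry before each attach, so both ends agree on the parameters; the stray nvme0n1 tokens interleaved between entries are the bdev names the attach RPC prints on success.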
00:28:07.856 nvme0n1 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: ]] 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:07.856 20:06:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.856 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.118 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.118 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.118 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.118 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.118 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.118 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.118 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.118 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.118 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.118 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.118 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.118 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:08.118 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.118 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.118 nvme0n1 00:28:08.118 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.380 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.380 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.380 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.380 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.380 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.380 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.380 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.380 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.380 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.380 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.380 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.380 20:06:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:08.380 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.380 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.380 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:08.380 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:08.380 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:28:08.380 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:28:08.380 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.380 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:08.380 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:28:08.380 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: ]] 00:28:08.380 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:28:08.381 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:08.381 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.381 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.381 20:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:08.381 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:08.381 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.381 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:08.381 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.381 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.381 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.381 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.381 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.381 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.381 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.381 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.381 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.381 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.381 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.381 20:06:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.381 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.381 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.381 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:08.381 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.381 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.641 nvme0n1 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: ]] 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.641 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.902 nvme0n1 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: ]] 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.902 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.903 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.903 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.903 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.903 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.903 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:08.903 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.903 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.163 nvme0n1 00:28:09.163 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.163 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.163 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.163 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.163 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.163 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.425 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.425 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.425 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.425 20:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.425 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.687 nvme0n1 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: ]] 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.687 20:06:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:09.687 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:09.688 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:09.688 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.688 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.258 nvme0n1 00:28:10.258 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.258 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.258 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.258 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.258 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.258 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.258 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.258 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.258 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.258 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.258 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.258 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.258 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:10.258 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.258 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.258 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:10.258 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:10.258 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:28:10.258 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:28:10.258 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.258 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:10.258 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:28:10.258 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: ]] 00:28:10.258 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:28:10.259 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:10.259 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.259 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.259 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:10.259 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:10.259 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.259 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:10.259 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.259 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.259 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.259 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.259 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:10.259 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:10.259 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:10.259 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.259 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.259 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:10.259 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.259 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:10.259 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:10.259 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:10.259 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:10.259 20:06:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.259 20:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.521 nvme0n1 00:28:10.521 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.521 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.521 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.521 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.521 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.521 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.782 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.782 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.782 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.782 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.782 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.782 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.782 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:10.782 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.782 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: ]] 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.783 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.043 nvme0n1 00:28:11.043 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.043 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.043 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.043 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.043 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.043 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.043 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.043 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.043 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.043 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: ]] 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.305 20:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.567 nvme0n1 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.567 20:06:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.567 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:11.568 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.568 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.568 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.568 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.568 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:11.568 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:11.568 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:11.568 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.568 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.568 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:11.568 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.568 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:11.568 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:11.568 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:11.568 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:11.568 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.568 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.140 nvme0n1 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTRkMmU1MDU5M2MxOTA4MmZjMDdhOGVkZDZiZjdkZmI8YpAY: 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: ]] 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWU5NDAxOTkyODY3MjBhNDhiMGU0MmQ5YjA1NzE0ZDQ5OWQ1MDhkMGU0Nzc2ZjkyYmI4MWRlMDFiMGE1YmU1MLJRxWI=: 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.140 20:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.774 nvme0n1 00:28:12.774 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.774 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.774 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.774 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.774 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.774 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.774 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.774 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.774 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.774 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: ]] 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.064 20:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.636 nvme0n1 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.636 20:06:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: ]] 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.636 20:06:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.636 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.207 nvme0n1 00:28:14.207 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.207 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.207 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.207 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.207 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.207 20:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.207 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.207 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.207 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.207 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM3YjhlZTQ0NjZmMzQ0MDk2YTZhMTNiNmUzMTkyNGRkMTM4NDk2NjMzZDE5MWNhWQoPaw==: 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: ]] 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg1Mjk2ZjIxMDhiN2NhYjgwMTE1ZDZlY2YzYzQ2ZTe0oLwY: 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:14.468 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.468 
20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.041 nvme0n1 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjgwZDExODZiY2M4ODVhN2ExYWZhMTcyOWQ2MDlkNzVhY2EwNDRlOGViZjA1ZWFjMmQ2NmM0NjE3ZDljN2RlYwRI0NI=: 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.041 20:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.611 nvme0n1 00:28:15.611 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.611 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.611 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.611 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.611 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: ]] 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:15.873 request:
00:28:15.873 {
00:28:15.873 "name": "nvme0",
00:28:15.873 "trtype": "tcp",
00:28:15.873 "traddr": "10.0.0.1",
00:28:15.873 "adrfam": "ipv4",
00:28:15.873 "trsvcid": "4420",
00:28:15.873 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:28:15.873 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:28:15.873 "prchk_reftag": false,
00:28:15.873 "prchk_guard": false,
00:28:15.873 "hdgst": false,
00:28:15.873 "ddgst": false,
00:28:15.873 "allow_unrecognized_csi": false,
00:28:15.873 "method": "bdev_nvme_attach_controller",
00:28:15.873 "req_id": 1
00:28:15.873 }
00:28:15.873 Got JSON-RPC error response
00:28:15.873 response:
00:28:15.873 {
00:28:15.873 "code": -5,
00:28:15.873 "message": "Input/output error"
00:28:15.873 }
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
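
[Annotation] The exchange above is the first negative check of the auth suite: bdev_nvme_attach_controller is invoked with no DHCHAP key against a target that now requires DH-HMAC-CHAP, the RPC fails with -5 (Input/output error), and bdev_nvme_get_controllers confirms nothing stayed attached. A minimal sketch of the same flow, assuming a running SPDK target, scripts/rpc.py reachable at the path below, and the DHHC-1 keys already registered under the names key1/ckey1 as earlier in this run; this mirrors the RPC calls in the trace rather than reproducing the test script itself:

# Sketch only; rpc.py location and pre-loaded key names are assumptions.
rpc() { scripts/rpc.py "$@"; }

rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Authenticated attach succeeds when host and controller keys match.
rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key key1 --dhchap-ctrlr-key ckey1
rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
rpc bdev_nvme_detach_controller nvme0

# Without any key the target refuses the connection; rpc.py exits non-zero
# and prints the JSON-RPC error -5 seen in the log above.
if rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
  echo "FAIL: unauthenticated attach unexpectedly succeeded" >&2
  exit 1
fi
rpc bdev_nvme_get_controllers | jq length   # expect: 0
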
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:15.873 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:16.134 request:
00:28:16.134 {
00:28:16.134 "name": "nvme0",
00:28:16.134 "trtype": "tcp",
00:28:16.134 "traddr": "10.0.0.1",
00:28:16.134 "adrfam": "ipv4",
00:28:16.134 "trsvcid": "4420",
00:28:16.134 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:28:16.134 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:28:16.134 "prchk_reftag": false,
00:28:16.134 "prchk_guard": false,
00:28:16.134 "hdgst": false,
00:28:16.134 "ddgst": false,
00:28:16.134 "dhchap_key": "key2",
00:28:16.134 "allow_unrecognized_csi": false,
00:28:16.134 "method": "bdev_nvme_attach_controller",
00:28:16.134 "req_id": 1
00:28:16.134 }
00:28:16.134 Got JSON-RPC error response
00:28:16.134 response:
00:28:16.134 {
00:28:16.134 "code": -5,
00:28:16.134 "message": "Input/output error"
00:28:16.134 }
00:28:16.134 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:28:16.134 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:28:16.134 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:16.134 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:16.134 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:28:16.134 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:28:16.134 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:28:16.134 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:16.134 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:16.134 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:16.134 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
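
[Annotation] The NOT wrapper that keeps appearing in this stretch (autotest_common.sh@652 through @679 in the trace) inverts the wrapped command's exit status so an expected RPC failure counts as a pass, while exit statuses above 128 (signal deaths) still fail the test. A rough stand-in with the same shape, not SPDK's exact helper, which additionally vets its argument via valid_exec_arg:

# Simplified re-sketch of the NOT/es idiom visible in the trace; the real
# helper lives in test/common/autotest_common.sh.
NOT() {
  local es=0
  "$@" || es=$?
  (( es > 128 )) && return "$es"   # killed by a signal: a real failure
  (( es != 0 ))                    # succeed only if the command failed
}

# Usage, as in the trace: an attach carrying only a host key must be rejected.
# NOT rpc_cmd bdev_nvme_attach_controller ... --dhchap-key key2
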
00:28:16.134 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip
00:28:16.134 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:16.134 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:16.134 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:16.134 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:16.134 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:16.134 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:16.134 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:16.134 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:16.134 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:16.135 request:
00:28:16.135 {
00:28:16.135 "name": "nvme0",
00:28:16.135 "trtype": "tcp",
00:28:16.135 "traddr": "10.0.0.1",
00:28:16.135 "adrfam": "ipv4",
00:28:16.135 "trsvcid": "4420",
00:28:16.135 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:28:16.135 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:28:16.135 "prchk_reftag": false,
00:28:16.135 "prchk_guard": false,
00:28:16.135 "hdgst": false,
00:28:16.135 "ddgst": false,
00:28:16.135 "dhchap_key": "key1",
00:28:16.135 "dhchap_ctrlr_key": "ckey2",
00:28:16.135 "allow_unrecognized_csi": false,
00:28:16.135 "method": "bdev_nvme_attach_controller",
00:28:16.135 "req_id": 1
00:28:16.135 }
00:28:16.135 Got JSON-RPC error response
00:28:16.135 response:
00:28:16.135 {
00:28:16.135 "code": -5,
00:28:16.135 "message": "Input/output error"
00:28:16.135 }
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:16.135 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:16.499 nvme0n1
00:28:16.499 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:28:16.499 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:16.499 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:16.499 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:16.499 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:16.499 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma:
00:28:16.499 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6:
00:28:16.499 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:16.499 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:16.499 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma:
00:28:16.499 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: ]]
00:28:16.499 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6:
00:28:16.499 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:16.499 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:16.499 20:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:16.499 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:16.499 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers
00:28:16.499 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name'
00:28:16.500 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:16.500 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:16.500 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:16.500 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:16.500 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:28:16.500 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:28:16.500 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:28:16.500 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:28:16.500 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:16.500 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:28:16.500 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:16.500 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:28:16.500 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:16.500 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:16.500 request:
00:28:16.500 {
00:28:16.500 "name": "nvme0",
00:28:16.500 "dhchap_key": "key1",
00:28:16.500 "dhchap_ctrlr_key": "ckey2",
00:28:16.500 "method": "bdev_nvme_set_keys",
00:28:16.500 "req_id": 1
00:28:16.500 }
00:28:16.500 Got JSON-RPC error response
00:28:16.500 response:
00:28:16.500 {
00:28:16.500 "code": -13,
00:28:16.500 "message": "Permission denied"
00:28:16.500 }
00:28:16.500 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:28:16.500 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:28:16.500 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:16.500 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:16.500 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # ((
!es == 0 )) 00:28:16.500 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.500 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:16.500 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.500 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.500 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.500 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:16.500 20:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:17.441 20:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.441 20:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:17.441 20:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.441 20:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.441 20:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.441 20:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:17.441 20:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGExMTg0MWRmNWEzMzQ5NDVhOWIxNmMzMDQ2NGQ5NjY5MjAxZGRiYWZjOWUxYjViCrRonQ==: 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: ]] 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZTM2MTZkOWU4MzcwYmM3MDQxZTZmMTFmZmUxYmY5MDYwZTAxYTdhMTE1ZTJhMjYw3neqtw==: 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.828 nvme0n1 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTM4ODM4YTZmM2M3ZDMwNDNjZWMxYTM4MWQwZWI2ZDm4IAma: 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: ]] 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjZkNmFjYTM0MTQzMDA2MDUwNGJmZDc4NjViMjc5NzbJHEM6: 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0
00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.828 request:
00:28:18.828 {
00:28:18.828 "name": "nvme0",
00:28:18.828 "dhchap_key": "key2",
00:28:18.828 "dhchap_ctrlr_key": "ckey1",
00:28:18.828 "method": "bdev_nvme_set_keys",
00:28:18.828 "req_id": 1
00:28:18.828 }
00:28:18.828 Got JSON-RPC error response
00:28:18.828 response:
00:28:18.828 {
00:28:18.828 "code": -13,
00:28:18.828 "message": "Permission denied"
00:28:18.828 }
00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers
00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length
00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 ))
00:28:18.828 20:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s
00:28:19.771 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers
00:28:19.771 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length
00:28:19.771 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:19.771 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 ))
00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT
00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup
00:28:20.032 20:06:20
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:20.032 rmmod nvme_tcp 00:28:20.032 rmmod nvme_fabrics 00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3811861 ']' 00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3811861 00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3811861 ']' 00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3811861 00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3811861 00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3811861' 00:28:20.032 killing process with pid 3811861 00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3811861 00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3811861 00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:20.032 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:20.293 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:20.293 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:20.293 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.293 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:28:20.293 20:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.205 20:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:22.205 20:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:22.205 20:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:22.205 20:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:22.205 20:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:22.205 20:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:28:22.205 20:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:22.205 20:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:22.205 20:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:22.205 20:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:22.205 20:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:22.206 20:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:22.206 20:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:26.414 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:26.414 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:26.414 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:26.414 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:26.414 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:26.414 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:26.415 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:26.415 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:26.415 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:26.415 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:26.415 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:26.415 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:26.415 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:26.415 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:26.415 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:26.415 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:26.415 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:26.415 20:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.uCE /tmp/spdk.key-null.xJS /tmp/spdk.key-sha256.92L /tmp/spdk.key-sha384.Gk8 /tmp/spdk.key-sha512.evB /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:26.415 20:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:29.712 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:29.712 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:29.712 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
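
The cleanup traced above walks the kernel nvmet configfs tree child-first before unloading the transport modules. A minimal standalone sketch of that sequence, with paths taken from the trace; the target of the traced "echo 0" is assumed to be the namespace enable flag, which the log does not show:

    # Sketch of the clean_kernel_target teardown traced above. Assumes the
    # single-namespace, single-port nqn.2024-02.io.spdk:cnode0 layout this
    # test builds.
    cfs=/sys/kernel/config/nvmet
    subsys=$cfs/subsystems/nqn.2024-02.io.spdk:cnode0

    rm -f "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"    # revoke host access
    rmdir "$cfs/hosts/nqn.2024-02.io.spdk:host0"

    echo 0 > "$subsys/namespaces/1/enable"                     # assumed echo-0 target
    rm -f "$cfs/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0" # unlink port symlink
    rmdir "$subsys/namespaces/1"    # children before parents: configfs refuses
    rmdir "$cfs/ports/1"            # to rmdir a node that still has children
    rmdir "$subsys"                 # or symlinks pointing at it

    modprobe -r nvmet_tcp nvmet     # only now can the modules unload
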
00:28:29.712 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:29.712 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:29.712 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:29.712 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:29.971 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:29.971 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:29.971 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:29.972 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:29.972 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:29.972 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:29.972 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:29.972 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:29.972 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:29.972 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:30.231 00:28:30.231 real 1m1.063s 00:28:30.231 user 0m54.798s 00:28:30.231 sys 0m16.173s 00:28:30.231 20:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:30.231 20:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.231 ************************************ 00:28:30.231 END TEST nvmf_auth_host 00:28:30.231 ************************************ 00:28:30.231 20:06:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:30.231 20:06:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:30.231 20:06:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:30.231 20:06:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:30.231 20:06:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.231 ************************************ 00:28:30.231 START TEST nvmf_digest 00:28:30.231 ************************************ 00:28:30.232 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:30.492 * Looking for test storage... 
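
The digest test traced in the entries that follow gates its lcov option names on a dotted-version comparison ("lt 1.15 2" via cmp_versions, splitting on IFS=.-:). A minimal sketch of that split-and-compare idiom, assuming only numeric fields matter:

    # Sketch of the cmp_versions less-than check traced below: split both
    # versions on dots/dashes/colons and compare field by field, numerically.
    ver_lt() {                       # ver_lt 1.15 2  ->  true (1.15 < 2)
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                     # equal versions are not less-than
    }
    ver_lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_* option names"
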
00:28:30.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:30.492 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:30.492 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:28:30.492 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:30.492 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:30.492 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:30.492 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:30.492 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:30.492 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:30.492 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:30.492 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:30.492 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:28:30.492 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:30.492 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:30.492 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:30.492 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:30.492 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:30.492 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:30.492 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:30.492 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:30.492 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:30.492 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:30.492 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:30.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.493 --rc genhtml_branch_coverage=1 00:28:30.493 --rc genhtml_function_coverage=1 00:28:30.493 --rc genhtml_legend=1 00:28:30.493 --rc geninfo_all_blocks=1 00:28:30.493 --rc geninfo_unexecuted_blocks=1 00:28:30.493 00:28:30.493 ' 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:30.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.493 --rc genhtml_branch_coverage=1 00:28:30.493 --rc genhtml_function_coverage=1 00:28:30.493 --rc genhtml_legend=1 00:28:30.493 --rc geninfo_all_blocks=1 00:28:30.493 --rc geninfo_unexecuted_blocks=1 00:28:30.493 00:28:30.493 ' 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:30.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.493 --rc genhtml_branch_coverage=1 00:28:30.493 --rc genhtml_function_coverage=1 00:28:30.493 --rc genhtml_legend=1 00:28:30.493 --rc geninfo_all_blocks=1 00:28:30.493 --rc geninfo_unexecuted_blocks=1 00:28:30.493 00:28:30.493 ' 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:30.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.493 --rc genhtml_branch_coverage=1 00:28:30.493 --rc genhtml_function_coverage=1 00:28:30.493 --rc genhtml_legend=1 00:28:30.493 --rc geninfo_all_blocks=1 00:28:30.493 --rc geninfo_unexecuted_blocks=1 00:28:30.493 00:28:30.493 ' 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:30.493 
20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:30.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:30.493 20:06:31 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:30.493 20:06:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:38.630 
20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:38.630 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:38.630 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:38.630 Found net devices under 0000:4b:00.0: cvl_0_0 
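
Device discovery above matches the e810 PCI IDs (0x159b) and then maps each PCI function to its kernel net device through sysfs globbing. A sketch of that mapping for one function, using an address from the trace:

    # Sketch: resolve the net device bound to a PCI function, as the
    # gather_supported_nvmf_pci_devs trace above does.
    pci=0000:4b:00.0
    pci_net_devs=(/sys/bus/pci/devices/$pci/net/*)
    # A non-matching glob leaves the literal pattern behind, so test -e
    # on the first element before trusting the array.
    if [[ -e ${pci_net_devs[0]} ]]; then
        pci_net_devs=("${pci_net_devs[@]##*/}")   # keep interface names only
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    fi
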
00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:38.630 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:38.630 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:38.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:38.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:28:38.631 00:28:38.631 --- 10.0.0.2 ping statistics --- 00:28:38.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.631 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:38.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:38.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:28:38.631 00:28:38.631 --- 10.0.0.1 ping statistics --- 00:28:38.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.631 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:38.631 ************************************ 00:28:38.631 START TEST nvmf_digest_clean 00:28:38.631 ************************************ 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3829425 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3829425 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3829425 ']' 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:38.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:38.631 20:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:38.631 [2024-11-26 20:06:38.897682] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:28:38.631 [2024-11-26 20:06:38.897788] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:38.631 [2024-11-26 20:06:39.001594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.631 [2024-11-26 20:06:39.052441] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:38.631 [2024-11-26 20:06:39.052496] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:38.631 [2024-11-26 20:06:39.052505] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:38.631 [2024-11-26 20:06:39.052512] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:38.631 [2024-11-26 20:06:39.052518] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
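
The connectivity set up a few entries back isolates the target end: one port of the e810 pair (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace where nvmf_tgt runs, while the initiator keeps cvl_0_1 (10.0.0.1) in the root namespace. A condensed sketch of that topology, with names and addresses taken from the trace and error handling omitted:

    # Sketch of the two-namespace NVMe/TCP test topology traced above.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP traffic on the initiator port, then verify both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
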
00:28:38.631 [2024-11-26 20:06:39.053301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.890 20:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:38.890 20:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:38.890 20:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:38.890 20:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:38.890 20:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:39.152 20:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:39.152 20:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:39.152 20:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:39.152 20:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:39.152 20:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.152 20:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:39.152 null0 00:28:39.152 [2024-11-26 20:06:39.846556] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:39.152 [2024-11-26 20:06:39.870831] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:39.152 20:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.152 20:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:39.152 20:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:39.152 20:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:39.152 20:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:39.152 20:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:39.152 20:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:39.152 20:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:39.152 20:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3829768 00:28:39.152 20:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3829768 /var/tmp/bperf.sock 00:28:39.152 20:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3829768 ']' 00:28:39.152 20:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:39.152 20:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:39.152 20:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:28:39.152 20:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:39.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:39.152 20:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:39.152 20:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:39.152 [2024-11-26 20:06:39.931499] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:28:39.152 [2024-11-26 20:06:39.931566] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3829768 ] 00:28:39.414 [2024-11-26 20:06:40.025736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.414 [2024-11-26 20:06:40.083220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.988 20:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:39.988 20:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:39.988 20:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:39.988 20:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:39.988 20:06:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:40.249 20:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:40.250 20:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:40.822 nvme0n1 00:28:40.822 20:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:40.822 20:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:40.822 Running I/O for 2 seconds... 
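
Each digest pass drives the same three-step bperf sequence traced above: launch bdevperf suspended with --wait-for-rpc, finish framework init over its private RPC socket, then attach the controller with the data-digest flag. A condensed sketch, with $SPDK standing in for the checkout root and the waitforlisten polling loop omitted:

    # Sketch of one digest-clean pass, condensed from the trace above.
    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -q 128 -t 2 -z --wait-for-rpc &

    # The harness polls /var/tmp/bperf.sock before issuing RPCs (not shown).
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0      # --ddgst = TCP data digest

    # Kick off the timed workload against the attached nvme0n1 bdev.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
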
00:28:43.153 18647.00 IOPS, 72.84 MiB/s [2024-11-26T19:06:43.974Z] 20849.50 IOPS, 81.44 MiB/s 00:28:43.153 Latency(us) 00:28:43.153 [2024-11-26T19:06:43.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.153 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:43.153 nvme0n1 : 2.00 20863.98 81.50 0.00 0.00 6128.11 2689.71 20862.29 00:28:43.153 [2024-11-26T19:06:43.974Z] =================================================================================================================== 00:28:43.153 [2024-11-26T19:06:43.974Z] Total : 20863.98 81.50 0.00 0.00 6128.11 2689.71 20862.29 00:28:43.153 { 00:28:43.153 "results": [ 00:28:43.153 { 00:28:43.153 "job": "nvme0n1", 00:28:43.153 "core_mask": "0x2", 00:28:43.153 "workload": "randread", 00:28:43.153 "status": "finished", 00:28:43.153 "queue_depth": 128, 00:28:43.153 "io_size": 4096, 00:28:43.153 "runtime": 2.003213, 00:28:43.153 "iops": 20863.982012896282, 00:28:43.153 "mibps": 81.4999297378761, 00:28:43.153 "io_failed": 0, 00:28:43.153 "io_timeout": 0, 00:28:43.153 "avg_latency_us": 6128.114504286797, 00:28:43.153 "min_latency_us": 2689.7066666666665, 00:28:43.153 "max_latency_us": 20862.293333333335 00:28:43.153 } 00:28:43.153 ], 00:28:43.153 "core_count": 1 00:28:43.153 } 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:43.153 | select(.opcode=="crc32c") 00:28:43.153 | "\(.module_name) \(.executed)"' 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3829768 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3829768 ']' 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3829768 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3829768 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3829768' 00:28:43.153 killing process with pid 3829768 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3829768 00:28:43.153 Received shutdown signal, test time was about 2.000000 seconds 00:28:43.153 00:28:43.153 Latency(us) 00:28:43.153 [2024-11-26T19:06:43.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.153 [2024-11-26T19:06:43.974Z] =================================================================================================================== 00:28:43.153 [2024-11-26T19:06:43.974Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3829768 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3830452 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3830452 /var/tmp/bperf.sock 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3830452 ']' 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:43.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:43.153 20:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:43.415 [2024-11-26 20:06:44.005758] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
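
killprocess, traced above for the bperf pid, guards the kill behind two checks: the pid must still resolve to a live process, and its comm must not be sudo. A hedged reconstruction of that guard; the sudo-owned branch and non-Linux path are omitted:

    # Sketch of the killprocess guard traced above.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                      # still alive?
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name != sudo ]] || return 1     # sudo wrapper: handled elsewhere
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }
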
00:28:43.415 [2024-11-26 20:06:44.005814] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3830452 ] 00:28:43.415 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:43.415 Zero copy mechanism will not be used. 00:28:43.415 [2024-11-26 20:06:44.090198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.415 [2024-11-26 20:06:44.117539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.357 20:06:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:44.357 20:06:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:44.357 20:06:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:44.357 20:06:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:44.357 20:06:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:44.357 20:06:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:44.357 20:06:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:44.617 nvme0n1 00:28:44.617 20:06:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:44.617 20:06:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:44.617 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:44.617 Zero copy mechanism will not be used. 00:28:44.617 Running I/O for 2 seconds... 
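
After each run the test decides whether crc32c was offloaded by reading bdevperf's accel stats and keying on the executing module; the jq filter appears in the trace after the first run above and again below. A standalone sketch, again using $SPDK as a placeholder:

    # Sketch: ask bdevperf which accel module executed the crc32c operations,
    # as the get_accel_stats helper traced above does.
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[]
                | select(.opcode=="crc32c")
                | "\(.module_name) \(.executed)"' |
        while read -r acc_module acc_executed; do
            # No DSA in this run, so anything executed must be software crc32c.
            (( acc_executed > 0 )) && [[ $acc_module == software ]] &&
                echo "crc32c ran in software ($acc_executed ops)"
        done
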
00:28:46.942 3392.00 IOPS, 424.00 MiB/s [2024-11-26T19:06:47.763Z] 3444.00 IOPS, 430.50 MiB/s 00:28:46.942 Latency(us) 00:28:46.942 [2024-11-26T19:06:47.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.942 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:46.942 nvme0n1 : 2.01 3441.77 430.22 0.00 0.00 4645.51 515.41 11304.96 00:28:46.942 [2024-11-26T19:06:47.763Z] =================================================================================================================== 00:28:46.942 [2024-11-26T19:06:47.763Z] Total : 3441.77 430.22 0.00 0.00 4645.51 515.41 11304.96 00:28:46.942 { 00:28:46.942 "results": [ 00:28:46.942 { 00:28:46.942 "job": "nvme0n1", 00:28:46.942 "core_mask": "0x2", 00:28:46.942 "workload": "randread", 00:28:46.942 "status": "finished", 00:28:46.942 "queue_depth": 16, 00:28:46.942 "io_size": 131072, 00:28:46.942 "runtime": 2.005942, 00:28:46.942 "iops": 3441.7744879961633, 00:28:46.942 "mibps": 430.2218109995204, 00:28:46.942 "io_failed": 0, 00:28:46.942 "io_timeout": 0, 00:28:46.942 "avg_latency_us": 4645.507609115489, 00:28:46.942 "min_latency_us": 515.4133333333333, 00:28:46.942 "max_latency_us": 11304.96 00:28:46.942 } 00:28:46.942 ], 00:28:46.942 "core_count": 1 00:28:46.942 } 00:28:46.942 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:46.942 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:46.942 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:46.942 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:46.942 | select(.opcode=="crc32c") 00:28:46.942 | "\(.module_name) \(.executed)"' 00:28:46.942 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:46.942 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:46.942 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:46.942 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:46.942 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:46.942 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3830452 00:28:46.942 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3830452 ']' 00:28:46.942 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3830452 00:28:46.942 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:46.942 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:46.942 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3830452 00:28:46.942 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:46.942 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 
= sudo ']' 00:28:46.942 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3830452' 00:28:46.942 killing process with pid 3830452 00:28:46.942 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3830452 00:28:46.942 Received shutdown signal, test time was about 2.000000 seconds 00:28:46.942 00:28:46.942 Latency(us) 00:28:46.942 [2024-11-26T19:06:47.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.942 [2024-11-26T19:06:47.763Z] =================================================================================================================== 00:28:46.942 [2024-11-26T19:06:47.763Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:46.942 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3830452 00:28:47.203 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:47.203 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:47.203 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:47.203 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:47.203 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:47.203 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:47.203 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:47.203 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3831148 00:28:47.203 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3831148 /var/tmp/bperf.sock 00:28:47.203 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3831148 ']' 00:28:47.203 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:47.203 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:47.203 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:47.203 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:47.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:47.203 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:47.203 20:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:47.203 [2024-11-26 20:06:47.860550] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:28:47.203 [2024-11-26 20:06:47.860605] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3831148 ] 00:28:47.203 [2024-11-26 20:06:47.945722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.203 [2024-11-26 20:06:47.974398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.144 20:06:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:48.144 20:06:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:48.144 20:06:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:48.144 20:06:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:48.144 20:06:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:48.144 20:06:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:48.144 20:06:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:48.715 nvme0n1 00:28:48.715 20:06:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:48.715 20:06:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:48.715 Running I/O for 2 seconds... 
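The run above repeats the bring-up sequence that every clean-digest workload in this suite follows: bdevperf is started against a private RPC socket with --wait-for-rpc, the framework is initialized, an NVMe/TCP controller is attached with data digest verification (--ddgst) enabled, and perform_tests is driven over the same socket. A minimal standalone sketch of that sequence, assuming $SPDK points at the checkout used in this log and the target at 10.0.0.2:4420 is already listening (all commands appear verbatim in the entries above):

  # start bdevperf idle: --wait-for-rpc defers subsystem init, -z defers the test itself
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # finish framework init, then attach the controller with data digest enabled
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # kick off the timed run and collect the JSON results
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests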
00:28:50.597 29980.00 IOPS, 117.11 MiB/s [2024-11-26T19:06:51.418Z] 29734.00 IOPS, 116.15 MiB/s 00:28:50.597 Latency(us) 00:28:50.597 [2024-11-26T19:06:51.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.597 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:50.597 nvme0n1 : 2.01 29737.83 116.16 0.00 0.00 4297.27 2143.57 12397.23 00:28:50.597 [2024-11-26T19:06:51.418Z] =================================================================================================================== 00:28:50.597 [2024-11-26T19:06:51.418Z] Total : 29737.83 116.16 0.00 0.00 4297.27 2143.57 12397.23 00:28:50.597 { 00:28:50.597 "results": [ 00:28:50.597 { 00:28:50.597 "job": "nvme0n1", 00:28:50.597 "core_mask": "0x2", 00:28:50.597 "workload": "randwrite", 00:28:50.597 "status": "finished", 00:28:50.597 "queue_depth": 128, 00:28:50.597 "io_size": 4096, 00:28:50.597 "runtime": 2.005392, 00:28:50.597 "iops": 29737.826818896257, 00:28:50.597 "mibps": 116.1633860113135, 00:28:50.597 "io_failed": 0, 00:28:50.597 "io_timeout": 0, 00:28:50.597 "avg_latency_us": 4297.266127395085, 00:28:50.597 "min_latency_us": 2143.5733333333333, 00:28:50.597 "max_latency_us": 12397.226666666667 00:28:50.597 } 00:28:50.597 ], 00:28:50.597 "core_count": 1 00:28:50.597 } 00:28:50.597 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:50.597 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:50.597 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:50.597 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:50.597 | select(.opcode=="crc32c") 00:28:50.597 | "\(.module_name) \(.executed)"' 00:28:50.597 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:50.858 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:50.858 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:50.858 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:50.858 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:50.858 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3831148 00:28:50.858 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3831148 ']' 00:28:50.858 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3831148 00:28:50.858 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:50.858 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:50.858 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3831148 00:28:50.858 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:50.858 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- 
# '[' reactor_1 = sudo ']' 00:28:50.858 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3831148' 00:28:50.858 killing process with pid 3831148 00:28:50.858 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3831148 00:28:50.858 Received shutdown signal, test time was about 2.000000 seconds 00:28:50.858 00:28:50.858 Latency(us) 00:28:50.858 [2024-11-26T19:06:51.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.858 [2024-11-26T19:06:51.679Z] =================================================================================================================== 00:28:50.858 [2024-11-26T19:06:51.679Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:50.858 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3831148 00:28:51.119 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:51.119 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:51.119 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:51.119 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:51.119 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:51.119 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:51.119 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:51.119 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3831993 00:28:51.119 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3831993 /var/tmp/bperf.sock 00:28:51.119 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3831993 ']' 00:28:51.119 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:51.119 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:51.119 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:51.119 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:51.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:51.119 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:51.119 20:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:51.119 [2024-11-26 20:06:51.789111] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:28:51.119 [2024-11-26 20:06:51.789175] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3831993 ] 00:28:51.119 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:51.119 Zero copy mechanism will not be used. 00:28:51.119 [2024-11-26 20:06:51.870412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.119 [2024-11-26 20:06:51.900001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.060 20:06:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:52.060 20:06:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:52.060 20:06:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:52.060 20:06:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:52.060 20:06:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:52.060 20:06:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:52.060 20:06:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:52.320 nvme0n1 00:28:52.320 20:06:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:52.320 20:06:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:52.580 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:52.580 Zero copy mechanism will not be used. 00:28:52.580 Running I/O for 2 seconds... 
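Two details of this 128 KiB run are worth noting before the results land below. First, the "zero copy threshold (65536)" notices are informational: 131072-byte I/Os are above the reported cutoff, so the socket layer simply copies instead of using the zero copy path. Second, the MiB/s column in the result tables is derived directly from the JSON fields as iops * io_size / 2^20; a quick cross-check with jq (the same tool the harness already uses for accel stats), assuming the results blob has been saved to a hypothetical result.json:

  # 8197.44 IOPS at 131072 B per I/O  ->  8197.44 / 8  =  1024.68 MiB/s
  jq '.results[0] | .iops * .io_size / 1048576' result.json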
00:28:54.472 8725.00 IOPS, 1090.62 MiB/s [2024-11-26T19:06:55.293Z] 8198.50 IOPS, 1024.81 MiB/s 00:28:54.472 Latency(us) 00:28:54.472 [2024-11-26T19:06:55.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.472 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:54.472 nvme0n1 : 2.00 8197.44 1024.68 0.00 0.00 1948.61 1153.71 11031.89 00:28:54.472 [2024-11-26T19:06:55.293Z] =================================================================================================================== 00:28:54.472 [2024-11-26T19:06:55.293Z] Total : 8197.44 1024.68 0.00 0.00 1948.61 1153.71 11031.89 00:28:54.472 { 00:28:54.472 "results": [ 00:28:54.472 { 00:28:54.472 "job": "nvme0n1", 00:28:54.472 "core_mask": "0x2", 00:28:54.472 "workload": "randwrite", 00:28:54.472 "status": "finished", 00:28:54.472 "queue_depth": 16, 00:28:54.472 "io_size": 131072, 00:28:54.472 "runtime": 2.00221, 00:28:54.472 "iops": 8197.441826781407, 00:28:54.472 "mibps": 1024.680228347676, 00:28:54.472 "io_failed": 0, 00:28:54.472 "io_timeout": 0, 00:28:54.472 "avg_latency_us": 1948.6065533418632, 00:28:54.472 "min_latency_us": 1153.7066666666667, 00:28:54.472 "max_latency_us": 11031.893333333333 00:28:54.472 } 00:28:54.472 ], 00:28:54.472 "core_count": 1 00:28:54.472 } 00:28:54.472 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:54.472 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:54.472 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:54.472 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:54.472 | select(.opcode=="crc32c") 00:28:54.472 | "\(.module_name) \(.executed)"' 00:28:54.472 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:54.733 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:54.733 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:54.733 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:54.733 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:54.733 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3831993 00:28:54.733 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3831993 ']' 00:28:54.733 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3831993 00:28:54.733 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:54.733 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:54.733 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3831993 00:28:54.733 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:54.733 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 
-- # '[' reactor_1 = sudo ']' 00:28:54.733 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3831993' 00:28:54.733 killing process with pid 3831993 00:28:54.733 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3831993 00:28:54.733 Received shutdown signal, test time was about 2.000000 seconds 00:28:54.733 00:28:54.733 Latency(us) 00:28:54.733 [2024-11-26T19:06:55.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.733 [2024-11-26T19:06:55.554Z] =================================================================================================================== 00:28:54.733 [2024-11-26T19:06:55.554Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:54.733 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3831993 00:28:54.994 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3829425 00:28:54.994 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3829425 ']' 00:28:54.994 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3829425 00:28:54.994 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:54.994 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:54.994 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3829425 00:28:54.994 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:54.994 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:54.994 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3829425' 00:28:54.994 killing process with pid 3829425 00:28:54.994 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3829425 00:28:54.994 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3829425 00:28:54.994 00:28:54.994 real 0m16.915s 00:28:54.994 user 0m33.424s 00:28:54.994 sys 0m3.795s 00:28:54.994 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:54.994 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:54.994 ************************************ 00:28:54.994 END TEST nvmf_digest_clean 00:28:54.994 ************************************ 00:28:54.994 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:54.994 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:54.994 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:54.994 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:55.256 ************************************ 00:28:55.256 START TEST nvmf_digest_error 00:28:55.256 ************************************ 00:28:55.256 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 
-- # run_digest_error 00:28:55.256 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:55.256 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:55.256 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:55.256 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:55.256 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3832855 00:28:55.256 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3832855 00:28:55.256 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:55.256 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3832855 ']' 00:28:55.256 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.257 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:55.257 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:55.257 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:55.257 20:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:55.257 [2024-11-26 20:06:55.883859] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:28:55.257 [2024-11-26 20:06:55.883914] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:55.257 [2024-11-26 20:06:55.975763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.257 [2024-11-26 20:06:56.009185] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:55.257 [2024-11-26 20:06:56.009213] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:55.257 [2024-11-26 20:06:56.009220] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:55.257 [2024-11-26 20:06:56.009225] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:55.257 [2024-11-26 20:06:56.009229] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
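The error-path target is launched with -e 0xFFFF, so all tracepoint groups are enabled, and the app_setup_trace notices above spell out how to capture them. Restated as commands (both lifted from the notices themselves; the file name nvmf_trace.0 follows from the -i 0 instance ID, and the copy destination is arbitrary):

  # take a live snapshot of the enabled nvmf tracepoints, instance 0
  spdk_trace -s nvmf -i 0
  # or keep the raw shared-memory trace file for offline analysis
  cp /dev/shm/nvmf_trace.0 /tmp/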
00:28:55.257 [2024-11-26 20:06:56.009736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.199 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:56.199 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:56.199 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:56.199 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:56.199 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:56.199 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:56.199 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:56.199 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.199 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:56.199 [2024-11-26 20:06:56.719689] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:56.199 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.199 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:56.200 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:56.200 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.200 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:56.200 null0 00:28:56.200 [2024-11-26 20:06:56.798791] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:56.200 [2024-11-26 20:06:56.823001] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:56.200 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.200 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:56.200 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:56.200 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:56.200 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:56.200 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:56.200 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3832986 00:28:56.200 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3832986 /var/tmp/bperf.sock 00:28:56.200 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3832986 ']' 00:28:56.200 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
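Before that bdevperf instance comes up, the block above has already reconfigured the target for the error test: accel_assign_opc routes every crc32c operation to the error accel module, a null0 bdev is created, and a TCP listener is opened on 10.0.0.2:4420. The bdevperf flags themselves, annotated for reference (meanings per bdevperf's usage text; note that unlike the clean runs there is no --wait-for-rpc here, only -z to hold off until perform_tests arrives):

  # -m 2   core mask 0x2 (matches the 'Reactor started on core 1' line that follows)
  # -r     RPC socket the rpc.py / bdevperf.py calls below are pointed at
  # -w randread -o 4096 -q 128 -t 2   workload, I/O size, queue depth, run time
  # -z     start idle and wait for the perform_tests RPC
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z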
00:28:56.200 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:56.200 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:56.200 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:56.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:56.200 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:56.200 20:06:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:56.200 [2024-11-26 20:06:56.878864] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:28:56.200 [2024-11-26 20:06:56.878914] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3832986 ] 00:28:56.200 [2024-11-26 20:06:56.960387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.200 [2024-11-26 20:06:56.990097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:57.142 20:06:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:57.142 20:06:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:57.142 20:06:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:57.142 20:06:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:57.142 20:06:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:57.142 20:06:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.142 20:06:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:57.142 20:06:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.142 20:06:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:57.142 20:06:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:57.403 nvme0n1 00:28:57.403 20:06:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:57.403 20:06:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.403 20:06:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
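That completes the arming sequence for the digest-error run: bdev retries are made unbounded (--bdev-retry-count -1), crc32c injection is disabled so the controller can attach cleanly with --ddgst, and then corruption is injected into the next 256 crc32c operations. A condensed restatement of the two target-side toggles (shown here as plain rpc.py calls; the harness's rpc_cmd wrapper adds the network-namespace and socket plumbing):

  # keep crc32c clean while the host attaches with data digest enabled
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # then corrupt the next 256 crc32c operations: each host read that follows fails
  # its data digest check and completes as a transient transport error, which the
  # bdev layer retries indefinitely
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256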
00:28:57.403 20:06:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.403 20:06:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:57.403 20:06:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:57.664 Running I/O for 2 seconds... 00:28:57.664 [2024-11-26 20:06:58.279125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.664 [2024-11-26 20:06:58.279155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.664 [2024-11-26 20:06:58.279169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.664 [2024-11-26 20:06:58.287146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.664 [2024-11-26 20:06:58.287171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.664 [2024-11-26 20:06:58.287179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.664 [2024-11-26 20:06:58.297228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.664 [2024-11-26 20:06:58.297246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.664 [2024-11-26 20:06:58.297253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.664 [2024-11-26 20:06:58.306044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.664 [2024-11-26 20:06:58.306065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.664 [2024-11-26 20:06:58.306072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.664 [2024-11-26 20:06:58.315176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.664 [2024-11-26 20:06:58.315193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.664 [2024-11-26 20:06:58.315200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.664 [2024-11-26 20:06:58.323478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.664 [2024-11-26 20:06:58.323497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.664 [2024-11-26 20:06:58.323503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.664 [2024-11-26 20:06:58.333825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.664 [2024-11-26 20:06:58.333842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.664 [2024-11-26 20:06:58.333848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.664 [2024-11-26 20:06:58.342109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.664 [2024-11-26 20:06:58.342126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.664 [2024-11-26 20:06:58.342133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.664 [2024-11-26 20:06:58.351097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.664 [2024-11-26 20:06:58.351116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.664 [2024-11-26 20:06:58.351122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.664 [2024-11-26 20:06:58.362476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.664 [2024-11-26 20:06:58.362494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.664 [2024-11-26 20:06:58.362500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.664 [2024-11-26 20:06:58.370586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.664 [2024-11-26 20:06:58.370602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.664 [2024-11-26 20:06:58.370609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.664 [2024-11-26 20:06:58.381473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.664 [2024-11-26 20:06:58.381491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.664 [2024-11-26 20:06:58.381497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.664 [2024-11-26 20:06:58.391492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.664 [2024-11-26 20:06:58.391510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.664 [2024-11-26 20:06:58.391517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.664 [2024-11-26 20:06:58.401273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.664 [2024-11-26 20:06:58.401291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.664 [2024-11-26 20:06:58.401297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.664 [2024-11-26 20:06:58.409364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.664 [2024-11-26 20:06:58.409381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.664 [2024-11-26 20:06:58.409387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.664 [2024-11-26 20:06:58.418256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.664 [2024-11-26 20:06:58.418272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.664 [2024-11-26 20:06:58.418279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.665 [2024-11-26 20:06:58.427337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.665 [2024-11-26 20:06:58.427355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.665 [2024-11-26 20:06:58.427361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.665 [2024-11-26 20:06:58.436849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.665 [2024-11-26 20:06:58.436867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.665 [2024-11-26 20:06:58.436874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.665 [2024-11-26 20:06:58.446779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.665 [2024-11-26 20:06:58.446796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.665 [2024-11-26 20:06:58.446803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.665 [2024-11-26 20:06:58.455855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.665 [2024-11-26 20:06:58.455872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.665 [2024-11-26 20:06:58.455878] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.665 [2024-11-26 20:06:58.464069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.665 [2024-11-26 20:06:58.464086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.665 [2024-11-26 20:06:58.464096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.665 [2024-11-26 20:06:58.473178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.665 [2024-11-26 20:06:58.473196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.665 [2024-11-26 20:06:58.473209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.927 [2024-11-26 20:06:58.483324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.927 [2024-11-26 20:06:58.483341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.927 [2024-11-26 20:06:58.483348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.927 [2024-11-26 20:06:58.491886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.927 [2024-11-26 20:06:58.491903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.927 [2024-11-26 20:06:58.491909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.927 [2024-11-26 20:06:58.501058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.927 [2024-11-26 20:06:58.501076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.927 [2024-11-26 20:06:58.501082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.927 [2024-11-26 20:06:58.509658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.927 [2024-11-26 20:06:58.509675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.927 [2024-11-26 20:06:58.509681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.927 [2024-11-26 20:06:58.519169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.927 [2024-11-26 20:06:58.519186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24296 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:57.927 [2024-11-26 20:06:58.519192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.927 [2024-11-26 20:06:58.529239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.927 [2024-11-26 20:06:58.529255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.927 [2024-11-26 20:06:58.529261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.927 [2024-11-26 20:06:58.539538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.928 [2024-11-26 20:06:58.539555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.928 [2024-11-26 20:06:58.539561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.928 [2024-11-26 20:06:58.547444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.928 [2024-11-26 20:06:58.547464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.928 [2024-11-26 20:06:58.547471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.928 [2024-11-26 20:06:58.556603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.928 [2024-11-26 20:06:58.556619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.928 [2024-11-26 20:06:58.556626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.928 [2024-11-26 20:06:58.566241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.928 [2024-11-26 20:06:58.566258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.928 [2024-11-26 20:06:58.566264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.928 [2024-11-26 20:06:58.574037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.928 [2024-11-26 20:06:58.574054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.928 [2024-11-26 20:06:58.574060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.928 [2024-11-26 20:06:58.584698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.928 [2024-11-26 20:06:58.584716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 
nsid:1 lba:21905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.928 [2024-11-26 20:06:58.584722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.928 [2024-11-26 20:06:58.595055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.928 [2024-11-26 20:06:58.595073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.928 [2024-11-26 20:06:58.595079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.928 [2024-11-26 20:06:58.603092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.928 [2024-11-26 20:06:58.603109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.928 [2024-11-26 20:06:58.603115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.928 [2024-11-26 20:06:58.612600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.928 [2024-11-26 20:06:58.612618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.928 [2024-11-26 20:06:58.612625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.928 [2024-11-26 20:06:58.621238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.928 [2024-11-26 20:06:58.621255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.928 [2024-11-26 20:06:58.621265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.928 [2024-11-26 20:06:58.630370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.928 [2024-11-26 20:06:58.630387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.928 [2024-11-26 20:06:58.630393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.928 [2024-11-26 20:06:58.639124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.928 [2024-11-26 20:06:58.639141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.928 [2024-11-26 20:06:58.639147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.928 [2024-11-26 20:06:58.647253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:57.928 [2024-11-26 20:06:58.647270] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:57.928 [2024-11-26 20:06:58.647276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:57.928 [2024-11-26 20:06:58.657211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190)
00:28:57.928 [2024-11-26 20:06:58.657228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:57.928 [2024-11-26 20:06:58.657234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[~140 further three-line cycles of the same pattern elided, spanning 20:06:58.667 through 20:06:59.951: each cycle logs "data digest error on tqpair=(0x21ae190)" from nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done, the failing READ from nvme_qpair.c:243 (sqid:1, varying cid/lba, len:1), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion from nvme_qpair.c:474 (cdw0:0 sqhd:0001 p:0 m:0 dnr:0)]
00:28:58.717 27562.00 IOPS, 107.66 MiB/s [2024-11-26T19:06:59.538Z]
00:28:59.246 [2024-11-26 20:06:59.962056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190)
00:28:59.246 [2024-11-26 20:06:59.962072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.246 [2024-11-26 20:06:59.962078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.246 [2024-11-26 20:06:59.970358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190)
00:28:59.246 [2024-11-26 20:06:59.970374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.246 [2024-11-26 20:06:59.970380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT
ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.246 [2024-11-26 20:06:59.978992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.246 [2024-11-26 20:06:59.979008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.246 [2024-11-26 20:06:59.979015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.246 [2024-11-26 20:06:59.987740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.246 [2024-11-26 20:06:59.987757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.246 [2024-11-26 20:06:59.987763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.246 [2024-11-26 20:06:59.996836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.246 [2024-11-26 20:06:59.996852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.246 [2024-11-26 20:06:59.996858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.246 [2024-11-26 20:07:00.005838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.246 [2024-11-26 20:07:00.005857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.246 [2024-11-26 20:07:00.005864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.246 [2024-11-26 20:07:00.015017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.246 [2024-11-26 20:07:00.015033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.246 [2024-11-26 20:07:00.015040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.246 [2024-11-26 20:07:00.024438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.246 [2024-11-26 20:07:00.024455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.246 [2024-11-26 20:07:00.024461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.246 [2024-11-26 20:07:00.033307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.246 [2024-11-26 20:07:00.033324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.246 [2024-11-26 20:07:00.033331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.246 [2024-11-26 20:07:00.041491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.246 [2024-11-26 20:07:00.041507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.246 [2024-11-26 20:07:00.041517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.246 [2024-11-26 20:07:00.051393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.246 [2024-11-26 20:07:00.051409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.246 [2024-11-26 20:07:00.051416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.246 [2024-11-26 20:07:00.060025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.246 [2024-11-26 20:07:00.060042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.246 [2024-11-26 20:07:00.060048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.508 [2024-11-26 20:07:00.069190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.508 [2024-11-26 20:07:00.069207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.508 [2024-11-26 20:07:00.069213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.508 [2024-11-26 20:07:00.078712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.508 [2024-11-26 20:07:00.078729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.508 [2024-11-26 20:07:00.078736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.508 [2024-11-26 20:07:00.090479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.508 [2024-11-26 20:07:00.090496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.508 [2024-11-26 20:07:00.090502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.508 [2024-11-26 20:07:00.099151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.508 [2024-11-26 20:07:00.099174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:59.508 [2024-11-26 20:07:00.099182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.508 [2024-11-26 20:07:00.106656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.508 [2024-11-26 20:07:00.106673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.508 [2024-11-26 20:07:00.106680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.509 [2024-11-26 20:07:00.117967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.509 [2024-11-26 20:07:00.117984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.509 [2024-11-26 20:07:00.117990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.509 [2024-11-26 20:07:00.128247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.509 [2024-11-26 20:07:00.128267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.509 [2024-11-26 20:07:00.128273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.509 [2024-11-26 20:07:00.137020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.509 [2024-11-26 20:07:00.137037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.509 [2024-11-26 20:07:00.137043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.509 [2024-11-26 20:07:00.145628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.509 [2024-11-26 20:07:00.145645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.509 [2024-11-26 20:07:00.145651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.509 [2024-11-26 20:07:00.155125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.509 [2024-11-26 20:07:00.155142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.509 [2024-11-26 20:07:00.155148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.509 [2024-11-26 20:07:00.164092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.509 [2024-11-26 20:07:00.164109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1578 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.509 [2024-11-26 20:07:00.164115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.509 [2024-11-26 20:07:00.172814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.509 [2024-11-26 20:07:00.172830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.509 [2024-11-26 20:07:00.172836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.509 [2024-11-26 20:07:00.180808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.509 [2024-11-26 20:07:00.180824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.509 [2024-11-26 20:07:00.180831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.509 [2024-11-26 20:07:00.191427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.509 [2024-11-26 20:07:00.191444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.509 [2024-11-26 20:07:00.191450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.509 [2024-11-26 20:07:00.200722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.509 [2024-11-26 20:07:00.200739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.509 [2024-11-26 20:07:00.200745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.509 [2024-11-26 20:07:00.209435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.509 [2024-11-26 20:07:00.209452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.509 [2024-11-26 20:07:00.209458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.509 [2024-11-26 20:07:00.217973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.509 [2024-11-26 20:07:00.217990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.509 [2024-11-26 20:07:00.217996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.509 [2024-11-26 20:07:00.228098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.509 [2024-11-26 20:07:00.228115] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.509 [2024-11-26 20:07:00.228121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.509 [2024-11-26 20:07:00.237600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.509 [2024-11-26 20:07:00.237617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.509 [2024-11-26 20:07:00.237623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.509 [2024-11-26 20:07:00.245301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.509 [2024-11-26 20:07:00.245318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.509 [2024-11-26 20:07:00.245324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.509 [2024-11-26 20:07:00.254655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.509 [2024-11-26 20:07:00.254672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.509 [2024-11-26 20:07:00.254678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.509 27648.50 IOPS, 108.00 MiB/s [2024-11-26T19:07:00.330Z] [2024-11-26 20:07:00.264806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21ae190) 00:28:59.509 [2024-11-26 20:07:00.264823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.509 [2024-11-26 20:07:00.264829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.509 00:28:59.509 Latency(us) 00:28:59.509 [2024-11-26T19:07:00.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.509 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:59.509 nvme0n1 : 2.00 27656.44 108.03 0.00 0.00 4623.07 2129.92 16930.13 00:28:59.509 [2024-11-26T19:07:00.330Z] =================================================================================================================== 00:28:59.509 [2024-11-26T19:07:00.330Z] Total : 27656.44 108.03 0.00 0.00 4623.07 2129.92 16930.13 00:28:59.509 { 00:28:59.509 "results": [ 00:28:59.509 { 00:28:59.509 "job": "nvme0n1", 00:28:59.509 "core_mask": "0x2", 00:28:59.509 "workload": "randread", 00:28:59.509 "status": "finished", 00:28:59.509 "queue_depth": 128, 00:28:59.509 "io_size": 4096, 00:28:59.509 "runtime": 2.004054, 00:28:59.509 "iops": 27656.44039531869, 00:28:59.509 "mibps": 108.03297029421363, 00:28:59.509 "io_failed": 0, 00:28:59.509 "io_timeout": 0, 00:28:59.509 "avg_latency_us": 4623.07261488498, 00:28:59.509 "min_latency_us": 2129.92, 00:28:59.509 "max_latency_us": 
00:28:59.509       "max_latency_us": 16930.133333333335
00:28:59.509     }
00:28:59.509   ],
00:28:59.509   "core_count": 1
00:28:59.509 }
00:28:59.509 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:59.509 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:59.509 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:28:59.509 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:59.771 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 217 > 0 ))
00:28:59.771 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3832986
00:28:59.771 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3832986 ']'
00:28:59.771 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3832986
00:28:59.771 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:28:59.771 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:59.771 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3832986
00:28:59.771 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:59.771 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:59.771 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3832986'
00:28:59.771 killing process with pid 3832986
00:28:59.771 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3832986
00:28:59.771 Received shutdown signal, test time was about 2.000000 seconds
00:28:59.771
00:28:59.771 Latency(us)
[2024-11-26T19:07:00.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-26T19:07:00.592Z] ===================================================================================================================
[2024-11-26T19:07:00.592Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:59.771 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3832986
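
The (( 217 > 0 )) check just above is the test's pass criterion: after the 2-second randread pass with corrupted digests, bdevperf's per-controller NVMe error statistics must show at least one COMMAND TRANSIENT TRANSPORT ERROR. A minimal sketch of how get_transient_errcount derives that count, reconstructed from the echoed commands above rather than quoted from host/digest.sh (the function body is an assumption; the RPC name, jq path, and socket are taken verbatim from the trace):

# Sketch reconstructed from the xtrace above; not the verbatim host/digest.sh.
# bdevperf keeps per-status-code NVMe error counters because it was configured
# with bdev_nvme_set_options --nvme-error-stat; bdev_get_iostat exposes them.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
get_transient_errcount() {
    "$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
}
errcount=$(get_transient_errcount nvme0n1)   # 217 in this run
(( errcount > 0 ))   # digest.sh@71: fail the test if no digest-induced errors were counted

Here 217 transient transport errors were counted against nvme0n1, so the 4096-byte pass succeeds and the harness tears bdevperf down before starting the next configuration.
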
00:29:00.033 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:00.033 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:00.033 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:00.033 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:00.033 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:00.033 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3833800
00:29:00.033 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3833800 /var/tmp/bperf.sock
00:29:00.033 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3833800 ']'
00:29:00.033 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:00.033 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:00.033 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:00.033 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:00.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:00.033 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:00.033 20:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:00.033 [2024-11-26 20:07:00.695744] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization...
00:29:00.033 [2024-11-26 20:07:00.695803] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3833800 ]
00:29:00.033 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:00.033 Zero copy mechanism will not be used.
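
run_bperf_err's three positional arguments (randread 131072 16) surface in the bdevperf command line above as -w randread -o 131072 -q 16, with the harness supplying the core mask, the private RPC socket, the 2-second duration, and -z (start suspended until a perform_tests RPC arrives). A hedged sketch of that plumbing, inferred from the echoed digest.sh@54-@60 lines rather than copied from the source:

# Inferred from the trace above; a sketch, not the verbatim host/digest.sh.
run_bperf_err() {
    local rw=$1 bs=$2 qd=$3
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w "$rw" -o "$bs" -t 2 -q "$qd" -z &
    bperfpid=$!
    # autotest_common.sh helper seen in the trace: poll until the pid is alive
    # and the UNIX-domain RPC socket accepts connections (up to max_retries)
    waitforlisten "$bperfpid" /var/tmp/bperf.sock
}

The 131072-byte I/O size also explains the notice that follows the launch: it exceeds the 65536-byte zero-copy threshold, so bdevperf falls back to buffered receives for this pass.
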
00:29:00.033 [2024-11-26 20:07:00.778107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:00.033 [2024-11-26 20:07:00.808058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:00.975 20:07:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:00.975 20:07:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:00.975 20:07:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:00.975 20:07:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:00.975 20:07:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:00.975 20:07:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:00.975 20:07:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:00.975 20:07:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:00.975 20:07:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:00.975 20:07:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:01.236 nvme0n1
00:29:01.236 20:07:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:01.236 20:07:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:01.236 20:07:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:01.236 20:07:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:01.236 20:07:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:01.236 20:07:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:01.236 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:01.236 Zero copy mechanism will not be used.
00:29:01.236 Running I/O for 2 seconds...
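
Before perform_tests fires, the trace above arms the 131072-byte pass with three RPC calls: per-status-code NVMe error statistics are enabled and retryable errors are retried indefinitely (--bdev-retry-count -1), the controller is attached with TCP data digest enabled (--ddgst), and the accel framework's CRC32C error injection is flipped from disable to corrupt. A sketch of those calls as plain rpc.py invocations (the bperf_rpc and rpc_cmd wrappers route to the bdevperf socket and the harness's default SPDK socket respectively; which socket rpc_cmd uses here is an assumption, everything else is verbatim from the trace):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# count NVMe errors per status code; never give up on retryable completions
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# attach the target with data digest on, so data PDUs carry a CRC32C digest
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# switch accel-layer CRC32C injection to 'corrupt' (-i 32 as given in the trace),
# so computed data digests stop matching and reads fail digest verification
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32

Each corrupted digest then shows up below as a data digest error on the new qpair (tqpair 0x1376570), and every affected READ is completed with the COMMAND TRANSIENT TRANSPORT ERROR (00/22) status that the next get_transient_errcount check looks for.
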
00:29:01.236 [2024-11-26 20:07:02.034596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570)
00:29:01.236 [2024-11-26 20:07:02.034632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.236 [2024-11-26 20:07:02.034640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-line sequence -- nvme_tcp.c:1365 "data digest error on tqpair=(0x1376570)", the nvme_qpair.c READ command print, and an nvme_qpair.c COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats for roughly fifty more 32-block (len:32) reads between 20:07:02.045 and 20:07:02.744; only the cid and lba of each read differ ...]
00:29:02.024 [2024-11-26 20:07:02.753052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest
error on tqpair=(0x1376570) 00:29:02.024 [2024-11-26 20:07:02.753070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 20:07:02.753076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 20:07:02.763948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.024 [2024-11-26 20:07:02.763965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 20:07:02.763971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 20:07:02.774476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.024 [2024-11-26 20:07:02.774494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 20:07:02.774500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 20:07:02.786320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.024 [2024-11-26 20:07:02.786338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 20:07:02.786344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 20:07:02.795763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.024 [2024-11-26 20:07:02.795781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 20:07:02.795787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 20:07:02.807912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.024 [2024-11-26 20:07:02.807929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 20:07:02.807935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 20:07:02.819646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.024 [2024-11-26 20:07:02.819663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 20:07:02.819669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 20:07:02.830168] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.024 [2024-11-26 20:07:02.830189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 20:07:02.830195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 20:07:02.838809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.024 [2024-11-26 20:07:02.838827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 20:07:02.838833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.287 [2024-11-26 20:07:02.849623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.287 [2024-11-26 20:07:02.849641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.287 [2024-11-26 20:07:02.849647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.287 [2024-11-26 20:07:02.860567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.287 [2024-11-26 20:07:02.860585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.287 [2024-11-26 20:07:02.860592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.287 [2024-11-26 20:07:02.872602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.287 [2024-11-26 20:07:02.872619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.287 [2024-11-26 20:07:02.872626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.287 [2024-11-26 20:07:02.885102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.287 [2024-11-26 20:07:02.885120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.287 [2024-11-26 20:07:02.885126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.287 [2024-11-26 20:07:02.896621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.287 [2024-11-26 20:07:02.896639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.287 [2024-11-26 20:07:02.896645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:29:02.287 [2024-11-26 20:07:02.908860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.287 [2024-11-26 20:07:02.908876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.287 [2024-11-26 20:07:02.908883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.287 [2024-11-26 20:07:02.920657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.287 [2024-11-26 20:07:02.920674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.287 [2024-11-26 20:07:02.920680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.287 [2024-11-26 20:07:02.934007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.287 [2024-11-26 20:07:02.934025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.287 [2024-11-26 20:07:02.934032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.287 [2024-11-26 20:07:02.946384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.287 [2024-11-26 20:07:02.946401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.287 [2024-11-26 20:07:02.946407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.287 [2024-11-26 20:07:02.957194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.287 [2024-11-26 20:07:02.957213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.287 [2024-11-26 20:07:02.957219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.287 [2024-11-26 20:07:02.965707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.287 [2024-11-26 20:07:02.965724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.287 [2024-11-26 20:07:02.965731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.287 [2024-11-26 20:07:02.974664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.287 [2024-11-26 20:07:02.974681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.287 [2024-11-26 20:07:02.974688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.287 [2024-11-26 20:07:02.984274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.287 [2024-11-26 20:07:02.984292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.287 [2024-11-26 20:07:02.984298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.287 [2024-11-26 20:07:02.996687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.287 [2024-11-26 20:07:02.996705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.287 [2024-11-26 20:07:02.996711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.287 [2024-11-26 20:07:03.004747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.287 [2024-11-26 20:07:03.004765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.287 [2024-11-26 20:07:03.004771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.287 [2024-11-26 20:07:03.013056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.288 [2024-11-26 20:07:03.013074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.288 [2024-11-26 20:07:03.013084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.288 2697.00 IOPS, 337.12 MiB/s [2024-11-26T19:07:03.109Z] [2024-11-26 20:07:03.024228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.288 [2024-11-26 20:07:03.024246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.288 [2024-11-26 20:07:03.024252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.288 [2024-11-26 20:07:03.033488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.288 [2024-11-26 20:07:03.033506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.288 [2024-11-26 20:07:03.033512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.288 [2024-11-26 20:07:03.045698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.288 [2024-11-26 20:07:03.045715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.288 [2024-11-26 20:07:03.045722] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.288 [2024-11-26 20:07:03.057322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.288 [2024-11-26 20:07:03.057341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.288 [2024-11-26 20:07:03.057347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.288 [2024-11-26 20:07:03.068019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.288 [2024-11-26 20:07:03.068037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.288 [2024-11-26 20:07:03.068043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.288 [2024-11-26 20:07:03.080167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.288 [2024-11-26 20:07:03.080186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.288 [2024-11-26 20:07:03.080192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.288 [2024-11-26 20:07:03.093358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.288 [2024-11-26 20:07:03.093376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.288 [2024-11-26 20:07:03.093382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.549 [2024-11-26 20:07:03.105258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.549 [2024-11-26 20:07:03.105277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.549 [2024-11-26 20:07:03.105283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.549 [2024-11-26 20:07:03.116390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.549 [2024-11-26 20:07:03.116413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.549 [2024-11-26 20:07:03.116419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.549 [2024-11-26 20:07:03.124453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.550 [2024-11-26 20:07:03.124472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:02.550 [2024-11-26 20:07:03.124478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.550 [2024-11-26 20:07:03.132401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.550 [2024-11-26 20:07:03.132419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.550 [2024-11-26 20:07:03.132426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.550 [2024-11-26 20:07:03.138200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.550 [2024-11-26 20:07:03.138218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.550 [2024-11-26 20:07:03.138224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.550 [2024-11-26 20:07:03.147458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.550 [2024-11-26 20:07:03.147476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.550 [2024-11-26 20:07:03.147483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.550 [2024-11-26 20:07:03.152298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.550 [2024-11-26 20:07:03.152317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.550 [2024-11-26 20:07:03.152323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.550 [2024-11-26 20:07:03.157026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.550 [2024-11-26 20:07:03.157043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.550 [2024-11-26 20:07:03.157049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.550 [2024-11-26 20:07:03.164819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.550 [2024-11-26 20:07:03.164838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.550 [2024-11-26 20:07:03.164844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.550 [2024-11-26 20:07:03.175651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.550 [2024-11-26 20:07:03.175669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.550 [2024-11-26 20:07:03.175675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.550 [2024-11-26 20:07:03.182755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.550 [2024-11-26 20:07:03.182773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.550 [2024-11-26 20:07:03.182779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.550 [2024-11-26 20:07:03.189616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.550 [2024-11-26 20:07:03.189635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.550 [2024-11-26 20:07:03.189641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.550 [2024-11-26 20:07:03.198750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.550 [2024-11-26 20:07:03.198768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.550 [2024-11-26 20:07:03.198775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.550 [2024-11-26 20:07:03.204038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.550 [2024-11-26 20:07:03.204055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.550 [2024-11-26 20:07:03.204061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.550 [2024-11-26 20:07:03.212070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.550 [2024-11-26 20:07:03.212088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.550 [2024-11-26 20:07:03.212095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.550 [2024-11-26 20:07:03.220767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.550 [2024-11-26 20:07:03.220785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.550 [2024-11-26 20:07:03.220792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.550 [2024-11-26 20:07:03.233237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.550 [2024-11-26 20:07:03.233256] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.550 [2024-11-26 20:07:03.233262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.550 [2024-11-26 20:07:03.246267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.550 [2024-11-26 20:07:03.246285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.550 [2024-11-26 20:07:03.246291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.550 [2024-11-26 20:07:03.258360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.550 [2024-11-26 20:07:03.258378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.550 [2024-11-26 20:07:03.258387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.550 [2024-11-26 20:07:03.270527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.550 [2024-11-26 20:07:03.270545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.550 [2024-11-26 20:07:03.270552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.550 [2024-11-26 20:07:03.282153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.550 [2024-11-26 20:07:03.282176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.550 [2024-11-26 20:07:03.282182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.550 [2024-11-26 20:07:03.295054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.550 [2024-11-26 20:07:03.295073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.550 [2024-11-26 20:07:03.295079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.550 [2024-11-26 20:07:03.306554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.550 [2024-11-26 20:07:03.306572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.550 [2024-11-26 20:07:03.306579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.550 [2024-11-26 20:07:03.319339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.551 
[2024-11-26 20:07:03.319358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.551 [2024-11-26 20:07:03.319364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.551 [2024-11-26 20:07:03.331547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.551 [2024-11-26 20:07:03.331565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.551 [2024-11-26 20:07:03.331572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.551 [2024-11-26 20:07:03.343364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.551 [2024-11-26 20:07:03.343382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.551 [2024-11-26 20:07:03.343388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.551 [2024-11-26 20:07:03.354700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.551 [2024-11-26 20:07:03.354718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.551 [2024-11-26 20:07:03.354725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.813 [2024-11-26 20:07:03.366372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.813 [2024-11-26 20:07:03.366391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.813 [2024-11-26 20:07:03.366397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.813 [2024-11-26 20:07:03.375136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.813 [2024-11-26 20:07:03.375154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.813 [2024-11-26 20:07:03.375164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.813 [2024-11-26 20:07:03.380559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.813 [2024-11-26 20:07:03.380576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.813 [2024-11-26 20:07:03.380583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.813 [2024-11-26 20:07:03.387047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1376570) 00:29:02.813 [2024-11-26 20:07:03.387065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.813 [2024-11-26 20:07:03.387071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.813 [2024-11-26 20:07:03.392323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.813 [2024-11-26 20:07:03.392341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.813 [2024-11-26 20:07:03.392347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.813 [2024-11-26 20:07:03.401555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.813 [2024-11-26 20:07:03.401573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.813 [2024-11-26 20:07:03.401579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.814 [2024-11-26 20:07:03.409281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.814 [2024-11-26 20:07:03.409298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.814 [2024-11-26 20:07:03.409305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.814 [2024-11-26 20:07:03.414073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.814 [2024-11-26 20:07:03.414091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.814 [2024-11-26 20:07:03.414098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.814 [2024-11-26 20:07:03.422919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.814 [2024-11-26 20:07:03.422936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.814 [2024-11-26 20:07:03.422946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.814 [2024-11-26 20:07:03.429035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.814 [2024-11-26 20:07:03.429053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.814 [2024-11-26 20:07:03.429060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.814 [2024-11-26 20:07:03.436836] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.814 [2024-11-26 20:07:03.436854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.814 [2024-11-26 20:07:03.436860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.814 [2024-11-26 20:07:03.442369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.814 [2024-11-26 20:07:03.442387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.814 [2024-11-26 20:07:03.442393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.814 [2024-11-26 20:07:03.452407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.814 [2024-11-26 20:07:03.452425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.814 [2024-11-26 20:07:03.452431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.814 [2024-11-26 20:07:03.460350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.814 [2024-11-26 20:07:03.460368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.814 [2024-11-26 20:07:03.460374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.814 [2024-11-26 20:07:03.466215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.814 [2024-11-26 20:07:03.466233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.814 [2024-11-26 20:07:03.466239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.814 [2024-11-26 20:07:03.471285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.814 [2024-11-26 20:07:03.471303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.814 [2024-11-26 20:07:03.471309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.814 [2024-11-26 20:07:03.479456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.814 [2024-11-26 20:07:03.479474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.814 [2024-11-26 20:07:03.479480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:29:02.814 [2024-11-26 20:07:03.488515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.814 [2024-11-26 20:07:03.488536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.814 [2024-11-26 20:07:03.488542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.814 [2024-11-26 20:07:03.497080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.814 [2024-11-26 20:07:03.497098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.814 [2024-11-26 20:07:03.497105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.814 [2024-11-26 20:07:03.502615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.814 [2024-11-26 20:07:03.502632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.814 [2024-11-26 20:07:03.502638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.814 [2024-11-26 20:07:03.511065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.814 [2024-11-26 20:07:03.511083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.814 [2024-11-26 20:07:03.511089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.814 [2024-11-26 20:07:03.516669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.814 [2024-11-26 20:07:03.516687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.814 [2024-11-26 20:07:03.516693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.814 [2024-11-26 20:07:03.523657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.814 [2024-11-26 20:07:03.523675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.814 [2024-11-26 20:07:03.523681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.814 [2024-11-26 20:07:03.531622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.814 [2024-11-26 20:07:03.531640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.814 [2024-11-26 20:07:03.531647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.814 [2024-11-26 20:07:03.539098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.814 [2024-11-26 20:07:03.539116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.814 [2024-11-26 20:07:03.539122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.814 [2024-11-26 20:07:03.546463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.814 [2024-11-26 20:07:03.546481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.814 [2024-11-26 20:07:03.546488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.814 [2024-11-26 20:07:03.556164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.814 [2024-11-26 20:07:03.556182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.814 [2024-11-26 20:07:03.556188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.815 [2024-11-26 20:07:03.567352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.815 [2024-11-26 20:07:03.567371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.815 [2024-11-26 20:07:03.567377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.815 [2024-11-26 20:07:03.576016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.815 [2024-11-26 20:07:03.576035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.815 [2024-11-26 20:07:03.576041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.815 [2024-11-26 20:07:03.581953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.815 [2024-11-26 20:07:03.581972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.815 [2024-11-26 20:07:03.581978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.815 [2024-11-26 20:07:03.587588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.815 [2024-11-26 20:07:03.587607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.815 [2024-11-26 20:07:03.587613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.815 [2024-11-26 20:07:03.595395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.815 [2024-11-26 20:07:03.595413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.815 [2024-11-26 20:07:03.595419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.815 [2024-11-26 20:07:03.603531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.815 [2024-11-26 20:07:03.603549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.815 [2024-11-26 20:07:03.603555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.815 [2024-11-26 20:07:03.608754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.815 [2024-11-26 20:07:03.608772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.815 [2024-11-26 20:07:03.608779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.815 [2024-11-26 20:07:03.613630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.815 [2024-11-26 20:07:03.613649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.815 [2024-11-26 20:07:03.613658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.815 [2024-11-26 20:07:03.618725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.815 [2024-11-26 20:07:03.618743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.815 [2024-11-26 20:07:03.618750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.815 [2024-11-26 20:07:03.624155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.815 [2024-11-26 20:07:03.624177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.815 [2024-11-26 20:07:03.624183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.815 [2024-11-26 20:07:03.627996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:02.815 [2024-11-26 20:07:03.628014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:02.815 [2024-11-26 20:07:03.628021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:03.078 [2024-11-26 20:07:03.632571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.078 [2024-11-26 20:07:03.632590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-11-26 20:07:03.632596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:03.078 [2024-11-26 20:07:03.636986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.078 [2024-11-26 20:07:03.637003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-11-26 20:07:03.637010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:03.078 [2024-11-26 20:07:03.641677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.078 [2024-11-26 20:07:03.641696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-11-26 20:07:03.641702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:03.078 [2024-11-26 20:07:03.649428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.078 [2024-11-26 20:07:03.649447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-11-26 20:07:03.649453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:03.078 [2024-11-26 20:07:03.656407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.078 [2024-11-26 20:07:03.656426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-11-26 20:07:03.656432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:03.078 [2024-11-26 20:07:03.666924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.078 [2024-11-26 20:07:03.666950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-11-26 20:07:03.666956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:03.078 [2024-11-26 20:07:03.675623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.078 [2024-11-26 20:07:03.675641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-11-26 20:07:03.675647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:03.078 [2024-11-26 20:07:03.683934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.078 [2024-11-26 20:07:03.683951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-11-26 20:07:03.683957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:03.078 [2024-11-26 20:07:03.690759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.078 [2024-11-26 20:07:03.690777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-11-26 20:07:03.690783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:03.078 [2024-11-26 20:07:03.700843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.078 [2024-11-26 20:07:03.700862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-11-26 20:07:03.700868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:03.078 [2024-11-26 20:07:03.709955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.078 [2024-11-26 20:07:03.709973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-11-26 20:07:03.709979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:03.078 [2024-11-26 20:07:03.715311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.078 [2024-11-26 20:07:03.715330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-11-26 20:07:03.715336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:03.078 [2024-11-26 20:07:03.724441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.078 [2024-11-26 20:07:03.724460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-11-26 20:07:03.724466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:03.078 [2024-11-26 20:07:03.734677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.078 [2024-11-26 20:07:03.734695] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-11-26 20:07:03.734705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:03.078 [2024-11-26 20:07:03.739202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.078 [2024-11-26 20:07:03.739220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-11-26 20:07:03.739226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:03.078 [2024-11-26 20:07:03.747819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.078 [2024-11-26 20:07:03.747837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-11-26 20:07:03.747843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:03.078 [2024-11-26 20:07:03.755793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.078 [2024-11-26 20:07:03.755810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-11-26 20:07:03.755816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:03.078 [2024-11-26 20:07:03.762344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.078 [2024-11-26 20:07:03.762363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-11-26 20:07:03.762369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:03.078 [2024-11-26 20:07:03.773450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.078 [2024-11-26 20:07:03.773468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-11-26 20:07:03.773474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:03.078 [2024-11-26 20:07:03.781462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.078 [2024-11-26 20:07:03.781480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.078 [2024-11-26 20:07:03.781486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:03.078 [2024-11-26 20:07:03.791199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 
00:29:03.078 [2024-11-26 20:07:03.791217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-11-26 20:07:03.791223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:03.079 [2024-11-26 20:07:03.799986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.079 [2024-11-26 20:07:03.800004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-11-26 20:07:03.800010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:03.079 [2024-11-26 20:07:03.807421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.079 [2024-11-26 20:07:03.807442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-11-26 20:07:03.807449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:03.079 [2024-11-26 20:07:03.818396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.079 [2024-11-26 20:07:03.818414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-11-26 20:07:03.818421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:03.079 [2024-11-26 20:07:03.829758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.079 [2024-11-26 20:07:03.829776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-11-26 20:07:03.829782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:03.079 [2024-11-26 20:07:03.838452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.079 [2024-11-26 20:07:03.838470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-11-26 20:07:03.838476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:03.079 [2024-11-26 20:07:03.847201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.079 [2024-11-26 20:07:03.847219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-11-26 20:07:03.847225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:03.079 [2024-11-26 20:07:03.856183] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.079 [2024-11-26 20:07:03.856200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-11-26 20:07:03.856207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:03.079 [2024-11-26 20:07:03.865646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.079 [2024-11-26 20:07:03.865664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-11-26 20:07:03.865670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:03.079 [2024-11-26 20:07:03.874538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.079 [2024-11-26 20:07:03.874557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-11-26 20:07:03.874564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:03.079 [2024-11-26 20:07:03.880606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.079 [2024-11-26 20:07:03.880625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-11-26 20:07:03.880631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:03.079 [2024-11-26 20:07:03.885036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.079 [2024-11-26 20:07:03.885054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-11-26 20:07:03.885061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:03.079 [2024-11-26 20:07:03.890017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.079 [2024-11-26 20:07:03.890035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.079 [2024-11-26 20:07:03.890041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:03.342 [2024-11-26 20:07:03.897347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.342 [2024-11-26 20:07:03.897366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.342 [2024-11-26 20:07:03.897372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:29:03.342 [2024-11-26 20:07:03.904483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.342 [2024-11-26 20:07:03.904502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.342 [2024-11-26 20:07:03.904508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:03.342 [2024-11-26 20:07:03.914111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.342 [2024-11-26 20:07:03.914130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.342 [2024-11-26 20:07:03.914136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:03.342 [2024-11-26 20:07:03.919358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.342 [2024-11-26 20:07:03.919377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.342 [2024-11-26 20:07:03.919383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:03.342 [2024-11-26 20:07:03.929878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.342 [2024-11-26 20:07:03.929897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.342 [2024-11-26 20:07:03.929903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:03.342 [2024-11-26 20:07:03.937374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.342 [2024-11-26 20:07:03.937392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.342 [2024-11-26 20:07:03.937398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:03.342 [2024-11-26 20:07:03.949623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.342 [2024-11-26 20:07:03.949642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.342 [2024-11-26 20:07:03.949651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:03.342 [2024-11-26 20:07:03.959205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.342 [2024-11-26 20:07:03.959224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.342 [2024-11-26 20:07:03.959230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:03.342 [2024-11-26 20:07:03.964002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.342 [2024-11-26 20:07:03.964020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.342 [2024-11-26 20:07:03.964026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:03.342 [2024-11-26 20:07:03.969502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.342 [2024-11-26 20:07:03.969520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.342 [2024-11-26 20:07:03.969526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:03.342 [2024-11-26 20:07:03.974116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.342 [2024-11-26 20:07:03.974134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.342 [2024-11-26 20:07:03.974140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:03.342 [2024-11-26 20:07:03.979006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.342 [2024-11-26 20:07:03.979024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.342 [2024-11-26 20:07:03.979030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:03.342 [2024-11-26 20:07:03.983381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.342 [2024-11-26 20:07:03.983399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.342 [2024-11-26 20:07:03.983405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:03.342 [2024-11-26 20:07:03.988745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.342 [2024-11-26 20:07:03.988764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.342 [2024-11-26 20:07:03.988770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:03.342 [2024-11-26 20:07:03.992969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.342 [2024-11-26 20:07:03.992987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.342 [2024-11-26 20:07:03.992993] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:03.342 [2024-11-26 20:07:03.996902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.342 [2024-11-26 20:07:03.996923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.342 [2024-11-26 20:07:03.996930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:03.342 [2024-11-26 20:07:04.003512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.343 [2024-11-26 20:07:04.003530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.343 [2024-11-26 20:07:04.003536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:03.343 [2024-11-26 20:07:04.011476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.343 [2024-11-26 20:07:04.011495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.343 [2024-11-26 20:07:04.011501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:03.343 [2024-11-26 20:07:04.016287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.343 [2024-11-26 20:07:04.016304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.343 [2024-11-26 20:07:04.016310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:03.343 3270.00 IOPS, 408.75 MiB/s [2024-11-26T19:07:04.164Z] [2024-11-26 20:07:04.026361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1376570) 00:29:03.343 [2024-11-26 20:07:04.026379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.343 [2024-11-26 20:07:04.026385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:03.343 00:29:03.343 Latency(us) 00:29:03.343 [2024-11-26T19:07:04.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:03.343 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:03.343 nvme0n1 : 2.00 3273.84 409.23 0.00 0.00 4883.07 631.47 19005.44 00:29:03.343 [2024-11-26T19:07:04.164Z] =================================================================================================================== 00:29:03.343 [2024-11-26T19:07:04.164Z] Total : 3273.84 409.23 0.00 0.00 4883.07 631.47 19005.44 00:29:03.343 { 00:29:03.343 "results": [ 00:29:03.343 { 00:29:03.343 "job": "nvme0n1", 00:29:03.343 "core_mask": "0x2", 00:29:03.343 "workload": "randread", 00:29:03.343 "status": "finished", 00:29:03.343 "queue_depth": 16, 
00:29:03.343 "io_size": 131072, 00:29:03.343 "runtime": 2.002539, 00:29:03.343 "iops": 3273.843855225791, 00:29:03.343 "mibps": 409.23048190322385, 00:29:03.343 "io_failed": 0, 00:29:03.343 "io_timeout": 0, 00:29:03.343 "avg_latency_us": 4883.0714012609305, 00:29:03.343 "min_latency_us": 631.4666666666667, 00:29:03.343 "max_latency_us": 19005.44 00:29:03.343 } 00:29:03.343 ], 00:29:03.343 "core_count": 1 00:29:03.343 } 00:29:03.343 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:03.343 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:03.343 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:03.343 | .driver_specific 00:29:03.343 | .nvme_error 00:29:03.343 | .status_code 00:29:03.343 | .command_transient_transport_error' 00:29:03.343 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:03.604 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 212 > 0 )) 00:29:03.604 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3833800 00:29:03.604 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3833800 ']' 00:29:03.604 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3833800 00:29:03.604 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:03.604 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:03.604 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3833800 00:29:03.604 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:03.604 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:03.604 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3833800' 00:29:03.604 killing process with pid 3833800 00:29:03.604 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3833800 00:29:03.604 Received shutdown signal, test time was about 2.000000 seconds 00:29:03.604 00:29:03.604 Latency(us) 00:29:03.604 [2024-11-26T19:07:04.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:03.604 [2024-11-26T19:07:04.425Z] =================================================================================================================== 00:29:03.604 [2024-11-26T19:07:04.425Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:03.604 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3833800 00:29:03.604 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:29:03.604 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:03.604 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:03.604 
20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:03.604 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:03.604 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3834571 00:29:03.604 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3834571 /var/tmp/bperf.sock 00:29:03.604 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3834571 ']' 00:29:03.604 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:03.604 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:03.604 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:03.604 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:03.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:03.604 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:03.604 20:07:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:03.864 [2024-11-26 20:07:04.447323] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:29:03.864 [2024-11-26 20:07:04.447378] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3834571 ] 00:29:03.864 [2024-11-26 20:07:04.529031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.864 [2024-11-26 20:07:04.558576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.805 20:07:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:04.805 20:07:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:04.805 20:07:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:04.805 20:07:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:04.805 20:07:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:04.805 20:07:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.805 20:07:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:04.805 20:07:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.805 20:07:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc 
bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:04.805 20:07:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:05.066 nvme0n1 00:29:05.066 20:07:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:05.066 20:07:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.066 20:07:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:05.066 20:07:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.066 20:07:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:05.066 20:07:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:05.066 Running I/O for 2 seconds... 00:29:05.066 [2024-11-26 20:07:05.820852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.066 [2024-11-26 20:07:05.821719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.066 [2024-11-26 20:07:05.821746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:05.066 [2024-11-26 20:07:05.829597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016eeff18 00:29:05.066 [2024-11-26 20:07:05.830435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.066 [2024-11-26 20:07:05.830454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:05.066 [2024-11-26 20:07:05.838156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef0788 00:29:05.066 [2024-11-26 20:07:05.839000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.066 [2024-11-26 20:07:05.839017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:05.066 [2024-11-26 20:07:05.846716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016eeff18 00:29:05.066 [2024-11-26 20:07:05.847537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.067 [2024-11-26 20:07:05.847553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:05.067 [2024-11-26 20:07:05.855576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.067 [2024-11-26 
20:07:05.856535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.067 [2024-11-26 20:07:05.856552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.067 [2024-11-26 20:07:05.864410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.067 [2024-11-26 20:07:05.864666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.067 [2024-11-26 20:07:05.864681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.067 [2024-11-26 20:07:05.873192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.067 [2024-11-26 20:07:05.873413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.067 [2024-11-26 20:07:05.873428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.067 [2024-11-26 20:07:05.882067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.067 [2024-11-26 20:07:05.882317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.067 [2024-11-26 20:07:05.882334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.329 [2024-11-26 20:07:05.890879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.329 [2024-11-26 20:07:05.891141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.329 [2024-11-26 20:07:05.891168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.329 [2024-11-26 20:07:05.899680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.329 [2024-11-26 20:07:05.899892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.329 [2024-11-26 20:07:05.899906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.329 [2024-11-26 20:07:05.908436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.329 [2024-11-26 20:07:05.908650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.329 [2024-11-26 20:07:05.908665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.329 [2024-11-26 20:07:05.917220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with 
pdu=0x200016ef4298 00:29:05.329 [2024-11-26 20:07:05.917346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.329 [2024-11-26 20:07:05.917361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.329 [2024-11-26 20:07:05.926008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.329 [2024-11-26 20:07:05.926247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.329 [2024-11-26 20:07:05.926262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.329 [2024-11-26 20:07:05.934707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.329 [2024-11-26 20:07:05.935026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.329 [2024-11-26 20:07:05.935042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.329 [2024-11-26 20:07:05.943451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.329 [2024-11-26 20:07:05.943723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.329 [2024-11-26 20:07:05.943738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.329 [2024-11-26 20:07:05.952309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.329 [2024-11-26 20:07:05.952536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.329 [2024-11-26 20:07:05.952551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.329 [2024-11-26 20:07:05.961089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.330 [2024-11-26 20:07:05.961318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.330 [2024-11-26 20:07:05.961333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.330 [2024-11-26 20:07:05.969833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.330 [2024-11-26 20:07:05.969954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.330 [2024-11-26 20:07:05.969969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.330 [2024-11-26 20:07:05.978568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.330 [2024-11-26 20:07:05.978783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.330 [2024-11-26 20:07:05.978798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.330 [2024-11-26 20:07:05.987308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.330 [2024-11-26 20:07:05.987586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.330 [2024-11-26 20:07:05.987602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.330 [2024-11-26 20:07:05.996102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.330 [2024-11-26 20:07:05.996341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.330 [2024-11-26 20:07:05.996362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.330 [2024-11-26 20:07:06.004805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.330 [2024-11-26 20:07:06.005124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.330 [2024-11-26 20:07:06.005140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.330 [2024-11-26 20:07:06.013601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.330 [2024-11-26 20:07:06.013861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.330 [2024-11-26 20:07:06.013877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.330 [2024-11-26 20:07:06.022332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.330 [2024-11-26 20:07:06.022588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.330 [2024-11-26 20:07:06.022610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.330 [2024-11-26 20:07:06.031104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.330 [2024-11-26 20:07:06.031390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.330 [2024-11-26 20:07:06.031405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.330 [2024-11-26 20:07:06.039914] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.330 [2024-11-26 20:07:06.040139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.330 [2024-11-26 20:07:06.040154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.330 [2024-11-26 20:07:06.048738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.330 [2024-11-26 20:07:06.048989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.330 [2024-11-26 20:07:06.049004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.330 [2024-11-26 20:07:06.057563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.330 [2024-11-26 20:07:06.057814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.330 [2024-11-26 20:07:06.057830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.330 [2024-11-26 20:07:06.066418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.330 [2024-11-26 20:07:06.066678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.330 [2024-11-26 20:07:06.066692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.330 [2024-11-26 20:07:06.075250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.330 [2024-11-26 20:07:06.075499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.330 [2024-11-26 20:07:06.075514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.330 [2024-11-26 20:07:06.083971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.330 [2024-11-26 20:07:06.084088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.330 [2024-11-26 20:07:06.084103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.330 [2024-11-26 20:07:06.092830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.330 [2024-11-26 20:07:06.093080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.330 [2024-11-26 20:07:06.093102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:29:05.330 [2024-11-26 20:07:06.101566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.330 [2024-11-26 20:07:06.101775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.330 [2024-11-26 20:07:06.101789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.330 [2024-11-26 20:07:06.110321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.330 [2024-11-26 20:07:06.110550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.330 [2024-11-26 20:07:06.110565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.330 [2024-11-26 20:07:06.119104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.330 [2024-11-26 20:07:06.119383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.330 [2024-11-26 20:07:06.119399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.330 [2024-11-26 20:07:06.127824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.330 [2024-11-26 20:07:06.127981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.330 [2024-11-26 20:07:06.127996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.330 [2024-11-26 20:07:06.136560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.330 [2024-11-26 20:07:06.136828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.330 [2024-11-26 20:07:06.136844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.330 [2024-11-26 20:07:06.145282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.330 [2024-11-26 20:07:06.145545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.330 [2024-11-26 20:07:06.145563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.593 [2024-11-26 20:07:06.154103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.593 [2024-11-26 20:07:06.154338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.593 [2024-11-26 20:07:06.154354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.593 [2024-11-26 20:07:06.162819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.593 [2024-11-26 20:07:06.163086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.593 [2024-11-26 20:07:06.163101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.593 [2024-11-26 20:07:06.171553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.593 [2024-11-26 20:07:06.171830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.593 [2024-11-26 20:07:06.171846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.593 [2024-11-26 20:07:06.180384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.593 [2024-11-26 20:07:06.180505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.593 [2024-11-26 20:07:06.180520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.593 [2024-11-26 20:07:06.189138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.593 [2024-11-26 20:07:06.189405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.593 [2024-11-26 20:07:06.189421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.593 [2024-11-26 20:07:06.197897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.593 [2024-11-26 20:07:06.198164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.593 [2024-11-26 20:07:06.198179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.593 [2024-11-26 20:07:06.206629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.593 [2024-11-26 20:07:06.206847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.593 [2024-11-26 20:07:06.206862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.593 [2024-11-26 20:07:06.215394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.593 [2024-11-26 20:07:06.215649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.593 [2024-11-26 20:07:06.215664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.593 [2024-11-26 20:07:06.224101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.593 [2024-11-26 20:07:06.224344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.593 [2024-11-26 20:07:06.224359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.593 [2024-11-26 20:07:06.232875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.594 [2024-11-26 20:07:06.233115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.594 [2024-11-26 20:07:06.233129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.594 [2024-11-26 20:07:06.241640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.594 [2024-11-26 20:07:06.241871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.594 [2024-11-26 20:07:06.241886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.594 [2024-11-26 20:07:06.250356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.594 [2024-11-26 20:07:06.250622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.594 [2024-11-26 20:07:06.250637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.594 [2024-11-26 20:07:06.259101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.594 [2024-11-26 20:07:06.259359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.594 [2024-11-26 20:07:06.259374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.594 [2024-11-26 20:07:06.267820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.594 [2024-11-26 20:07:06.268055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.594 [2024-11-26 20:07:06.268070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.594 [2024-11-26 20:07:06.276552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.594 [2024-11-26 20:07:06.276813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.594 [2024-11-26 20:07:06.276828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.594 [2024-11-26 20:07:06.285307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.594 [2024-11-26 20:07:06.285558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.594 [2024-11-26 20:07:06.285582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.594 [2024-11-26 20:07:06.294108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.594 [2024-11-26 20:07:06.294353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.594 [2024-11-26 20:07:06.294368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.594 [2024-11-26 20:07:06.302868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.594 [2024-11-26 20:07:06.303148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.594 [2024-11-26 20:07:06.303168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.594 [2024-11-26 20:07:06.311616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.594 [2024-11-26 20:07:06.311837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.594 [2024-11-26 20:07:06.311852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.594 [2024-11-26 20:07:06.320352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.594 [2024-11-26 20:07:06.320567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.594 [2024-11-26 20:07:06.320581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.594 [2024-11-26 20:07:06.329090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.594 [2024-11-26 20:07:06.329411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.594 [2024-11-26 20:07:06.329427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.594 [2024-11-26 20:07:06.337918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.594 [2024-11-26 20:07:06.338165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.594 
[2024-11-26 20:07:06.338179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.594 [2024-11-26 20:07:06.346645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.594 [2024-11-26 20:07:06.346897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.594 [2024-11-26 20:07:06.346912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.594 [2024-11-26 20:07:06.355409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.594 [2024-11-26 20:07:06.355658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.594 [2024-11-26 20:07:06.355673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.594 [2024-11-26 20:07:06.364114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.594 [2024-11-26 20:07:06.364461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.594 [2024-11-26 20:07:06.364477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.594 [2024-11-26 20:07:06.372909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.594 [2024-11-26 20:07:06.373171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.594 [2024-11-26 20:07:06.373189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.594 [2024-11-26 20:07:06.381621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.594 [2024-11-26 20:07:06.381882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.594 [2024-11-26 20:07:06.381906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.594 [2024-11-26 20:07:06.390363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.594 [2024-11-26 20:07:06.390635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.594 [2024-11-26 20:07:06.390651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.594 [2024-11-26 20:07:06.399102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.594 [2024-11-26 20:07:06.399383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7029 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:05.594 [2024-11-26 20:07:06.399398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.594 [2024-11-26 20:07:06.407824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.594 [2024-11-26 20:07:06.408089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.594 [2024-11-26 20:07:06.408105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.857 [2024-11-26 20:07:06.416572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.857 [2024-11-26 20:07:06.416808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.857 [2024-11-26 20:07:06.416824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.857 [2024-11-26 20:07:06.425319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.857 [2024-11-26 20:07:06.425603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.857 [2024-11-26 20:07:06.425619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.857 [2024-11-26 20:07:06.434041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.857 [2024-11-26 20:07:06.434302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.858 [2024-11-26 20:07:06.434317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.858 [2024-11-26 20:07:06.442793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.858 [2024-11-26 20:07:06.443073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.858 [2024-11-26 20:07:06.443089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.858 [2024-11-26 20:07:06.451601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.858 [2024-11-26 20:07:06.451871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.858 [2024-11-26 20:07:06.451887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.858 [2024-11-26 20:07:06.460332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.858 [2024-11-26 20:07:06.460592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 
nsid:1 lba:21531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.858 [2024-11-26 20:07:06.460607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.858 [2024-11-26 20:07:06.469099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.858 [2024-11-26 20:07:06.469368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.858 [2024-11-26 20:07:06.469390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.858 [2024-11-26 20:07:06.477841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.858 [2024-11-26 20:07:06.478052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.858 [2024-11-26 20:07:06.478066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.858 [2024-11-26 20:07:06.486608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.858 [2024-11-26 20:07:06.486867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.858 [2024-11-26 20:07:06.486883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.858 [2024-11-26 20:07:06.495349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.858 [2024-11-26 20:07:06.495605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.858 [2024-11-26 20:07:06.495620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.858 [2024-11-26 20:07:06.504102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.858 [2024-11-26 20:07:06.504393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.858 [2024-11-26 20:07:06.504409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.858 [2024-11-26 20:07:06.512828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.858 [2024-11-26 20:07:06.513097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.858 [2024-11-26 20:07:06.513112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.858 [2024-11-26 20:07:06.521592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.858 [2024-11-26 20:07:06.521807] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.858 [2024-11-26 20:07:06.521822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.858 [2024-11-26 20:07:06.530401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.858 [2024-11-26 20:07:06.530663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.858 [2024-11-26 20:07:06.530678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.858 [2024-11-26 20:07:06.539201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.858 [2024-11-26 20:07:06.539429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.858 [2024-11-26 20:07:06.539444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.858 [2024-11-26 20:07:06.547905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.858 [2024-11-26 20:07:06.548180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.858 [2024-11-26 20:07:06.548195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.858 [2024-11-26 20:07:06.556731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.858 [2024-11-26 20:07:06.556947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.858 [2024-11-26 20:07:06.556962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.858 [2024-11-26 20:07:06.565471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.858 [2024-11-26 20:07:06.565740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.858 [2024-11-26 20:07:06.565756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.858 [2024-11-26 20:07:06.574233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.858 [2024-11-26 20:07:06.574485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.858 [2024-11-26 20:07:06.574500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.858 [2024-11-26 20:07:06.583155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.858 [2024-11-26 20:07:06.583438] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.858 [2024-11-26 20:07:06.583453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.858 [2024-11-26 20:07:06.591929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.858 [2024-11-26 20:07:06.592227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.858 [2024-11-26 20:07:06.592243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.858 [2024-11-26 20:07:06.600703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.858 [2024-11-26 20:07:06.600964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.858 [2024-11-26 20:07:06.600981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.858 [2024-11-26 20:07:06.609432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.858 [2024-11-26 20:07:06.609655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.858 [2024-11-26 20:07:06.609670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.858 [2024-11-26 20:07:06.618173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.858 [2024-11-26 20:07:06.618416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.859 [2024-11-26 20:07:06.618432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.859 [2024-11-26 20:07:06.626900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.859 [2024-11-26 20:07:06.627117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.859 [2024-11-26 20:07:06.627132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.859 [2024-11-26 20:07:06.635640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.859 [2024-11-26 20:07:06.635907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.859 [2024-11-26 20:07:06.635922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.859 [2024-11-26 20:07:06.644412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.859 
[2024-11-26 20:07:06.644629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.859 [2024-11-26 20:07:06.644644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.859 [2024-11-26 20:07:06.653187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.859 [2024-11-26 20:07:06.653456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.859 [2024-11-26 20:07:06.653478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.859 [2024-11-26 20:07:06.661974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.859 [2024-11-26 20:07:06.662222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.859 [2024-11-26 20:07:06.662236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:05.859 [2024-11-26 20:07:06.670810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:05.859 [2024-11-26 20:07:06.671063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.859 [2024-11-26 20:07:06.671084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.122 [2024-11-26 20:07:06.679591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.122 [2024-11-26 20:07:06.679846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.122 [2024-11-26 20:07:06.679862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.122 [2024-11-26 20:07:06.688395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.122 [2024-11-26 20:07:06.688634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.122 [2024-11-26 20:07:06.688649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.122 [2024-11-26 20:07:06.697165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.122 [2024-11-26 20:07:06.697381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.122 [2024-11-26 20:07:06.697396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.122 [2024-11-26 20:07:06.705901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) 
with pdu=0x200016ef4298 00:29:06.122 [2024-11-26 20:07:06.706118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.122 [2024-11-26 20:07:06.706133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.122 [2024-11-26 20:07:06.714632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.122 [2024-11-26 20:07:06.714916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.122 [2024-11-26 20:07:06.714932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.122 [2024-11-26 20:07:06.723381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.122 [2024-11-26 20:07:06.723640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.122 [2024-11-26 20:07:06.723655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.122 [2024-11-26 20:07:06.732181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.122 [2024-11-26 20:07:06.732318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.122 [2024-11-26 20:07:06.732333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.122 [2024-11-26 20:07:06.740948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.122 [2024-11-26 20:07:06.741213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.122 [2024-11-26 20:07:06.741228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.122 [2024-11-26 20:07:06.749675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.122 [2024-11-26 20:07:06.749944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.122 [2024-11-26 20:07:06.749960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.122 [2024-11-26 20:07:06.758413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.122 [2024-11-26 20:07:06.758626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.122 [2024-11-26 20:07:06.758641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.122 [2024-11-26 20:07:06.767142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.122 [2024-11-26 20:07:06.767391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.122 [2024-11-26 20:07:06.767405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.122 [2024-11-26 20:07:06.775870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.122 [2024-11-26 20:07:06.776075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.122 [2024-11-26 20:07:06.776090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.122 [2024-11-26 20:07:06.784593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.122 [2024-11-26 20:07:06.784799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.122 [2024-11-26 20:07:06.784814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.122 [2024-11-26 20:07:06.793352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.122 [2024-11-26 20:07:06.793623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.122 [2024-11-26 20:07:06.793639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.122 [2024-11-26 20:07:06.802074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.123 [2024-11-26 20:07:06.802328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.123 [2024-11-26 20:07:06.802344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.123 28875.00 IOPS, 112.79 MiB/s [2024-11-26T19:07:06.944Z] [2024-11-26 20:07:06.810877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.123 [2024-11-26 20:07:06.811114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.123 [2024-11-26 20:07:06.811129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.123 [2024-11-26 20:07:06.819612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.123 [2024-11-26 20:07:06.819881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.123 [2024-11-26 20:07:06.819896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
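Each burst above is one failure pattern repeated per 4 KiB write: the TCP transport's data_crc32_calc_done() recomputes the CRC32C data digest over an incoming data PDU, detects a mismatch, and the command is completed with the NVMe generic status TRANSIENT TRANSPORT ERROR (00/22), which the host-side nvme_qpair code then prints together with the failed WRITE. The interleaved progress sample (28875.00 IOPS, 112.79 MiB/s) shows the workload continuing, as expected for a retryable transport status. The sketch below is a minimal standalone model of that digest check, assuming the NVMe/TCP convention that the data digest (DDGST) is CRC32C over the PDU data section; it is not SPDK code, and the helpers crc32c() and verify_data_digest() are hypothetical names.

    /*
     * Illustrative sketch (not SPDK source): verify the data digest of a
     * received NVMe/TCP data PDU. DDGST is CRC32C (Castagnoli): reflected
     * polynomial 0x82F63B78, initial value 0xFFFFFFFF, final inversion.
     */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Bitwise (table-free) CRC32C over a buffer. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int bit = 0; bit < 8; bit++) {
                crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : (crc >> 1);
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /*
     * Hypothetical helper mirroring the check behind the repeated
     * "data_crc32_calc_done: Data digest error" entries: recompute the
     * digest over the 4 KiB data block (len:0x1000 in the log) and compare
     * it with the DDGST carried in the PDU. On mismatch the caller would
     * complete the command with Transient Transport Error (00/22).
     */
    static int verify_data_digest(const uint8_t *data, size_t len, uint32_t ddgst)
    {
        uint32_t calc = crc32c(data, len);

        if (calc != ddgst) {
            fprintf(stderr, "Data digest error: computed 0x%08x, PDU carried 0x%08x\n",
                    calc, ddgst);
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        uint8_t block[4096] = {0};
        uint32_t good = crc32c(block, sizeof(block));

        verify_data_digest(block, sizeof(block), good); /* matches */
        block[0] ^= 0x01;                               /* single bit flip */
        verify_data_digest(block, sizeof(block), good); /* digest error */
        return 0;
    }

A single flipped bit anywhere in the 4 KiB payload changes the CRC32C value, which is why every corrupted write in this run surfaces the same way; production implementations typically compute the same polynomial with hardware acceleration (e.g., the SSE4.2 crc32 instruction) rather than bit by bit.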
00:29:06.123 [2024-11-26 20:07:06.828372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.123 [2024-11-26 20:07:06.828631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.123 [2024-11-26 20:07:06.828648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.123 [2024-11-26 20:07:06.837126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.123 [2024-11-26 20:07:06.837392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.123 [2024-11-26 20:07:06.837408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.123 [2024-11-26 20:07:06.845880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.123 [2024-11-26 20:07:06.846106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.123 [2024-11-26 20:07:06.846121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.123 [2024-11-26 20:07:06.854630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.123 [2024-11-26 20:07:06.854859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.123 [2024-11-26 20:07:06.854874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.123 [2024-11-26 20:07:06.863406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.123 [2024-11-26 20:07:06.863660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.123 [2024-11-26 20:07:06.863676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.123 [2024-11-26 20:07:06.872234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.123 [2024-11-26 20:07:06.872484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.123 [2024-11-26 20:07:06.872499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.123 [2024-11-26 20:07:06.880947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.123 [2024-11-26 20:07:06.881210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.123 [2024-11-26 20:07:06.881225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.123 [2024-11-26 20:07:06.889799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.123 [2024-11-26 20:07:06.890042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.123 [2024-11-26 20:07:06.890057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.123 [2024-11-26 20:07:06.898518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.123 [2024-11-26 20:07:06.898734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.123 [2024-11-26 20:07:06.898749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.123 [2024-11-26 20:07:06.907289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.123 [2024-11-26 20:07:06.907543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.123 [2024-11-26 20:07:06.907558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.123 [2024-11-26 20:07:06.916036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.123 [2024-11-26 20:07:06.916280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.123 [2024-11-26 20:07:06.916295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.123 [2024-11-26 20:07:06.924851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.123 [2024-11-26 20:07:06.925090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.123 [2024-11-26 20:07:06.925104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.123 [2024-11-26 20:07:06.933534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.123 [2024-11-26 20:07:06.933807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.123 [2024-11-26 20:07:06.933823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.385 [2024-11-26 20:07:06.942330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.385 [2024-11-26 20:07:06.942548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.385 [2024-11-26 20:07:06.942563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.385 [2024-11-26 20:07:06.951074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.385 [2024-11-26 20:07:06.951331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.385 [2024-11-26 20:07:06.951347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.385 [2024-11-26 20:07:06.959811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.385 [2024-11-26 20:07:06.960066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.385 [2024-11-26 20:07:06.960081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.385 [2024-11-26 20:07:06.968587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.385 [2024-11-26 20:07:06.968839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.385 [2024-11-26 20:07:06.968855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.385 [2024-11-26 20:07:06.977328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.385 [2024-11-26 20:07:06.977542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.385 [2024-11-26 20:07:06.977557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.385 [2024-11-26 20:07:06.986038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.385 [2024-11-26 20:07:06.986311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.385 [2024-11-26 20:07:06.986327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.385 [2024-11-26 20:07:06.994761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.385 [2024-11-26 20:07:06.995054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.385 [2024-11-26 20:07:06.995070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.385 [2024-11-26 20:07:07.003576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.385 [2024-11-26 20:07:07.003821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.385 [2024-11-26 20:07:07.003841] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.385 [2024-11-26 20:07:07.012319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.385 [2024-11-26 20:07:07.012439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.385 [2024-11-26 20:07:07.012454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.385 [2024-11-26 20:07:07.021061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.385 [2024-11-26 20:07:07.021284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.385 [2024-11-26 20:07:07.021299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.385 [2024-11-26 20:07:07.029805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.385 [2024-11-26 20:07:07.029924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.385 [2024-11-26 20:07:07.029939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.385 [2024-11-26 20:07:07.038531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.385 [2024-11-26 20:07:07.038784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.385 [2024-11-26 20:07:07.038798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.385 [2024-11-26 20:07:07.047268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.386 [2024-11-26 20:07:07.047502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.386 [2024-11-26 20:07:07.047516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.386 [2024-11-26 20:07:07.056012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.386 [2024-11-26 20:07:07.056254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.386 [2024-11-26 20:07:07.056272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.386 [2024-11-26 20:07:07.064772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.386 [2024-11-26 20:07:07.065011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.386 
[2024-11-26 20:07:07.065027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.386 [2024-11-26 20:07:07.073522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.386 [2024-11-26 20:07:07.073773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.386 [2024-11-26 20:07:07.073787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.386 [2024-11-26 20:07:07.082307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.386 [2024-11-26 20:07:07.082545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.386 [2024-11-26 20:07:07.082568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.386 [2024-11-26 20:07:07.091032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.386 [2024-11-26 20:07:07.091271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.386 [2024-11-26 20:07:07.091286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.386 [2024-11-26 20:07:07.099794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.386 [2024-11-26 20:07:07.100047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.386 [2024-11-26 20:07:07.100069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.386 [2024-11-26 20:07:07.108583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.386 [2024-11-26 20:07:07.108820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.386 [2024-11-26 20:07:07.108835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.386 [2024-11-26 20:07:07.117299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.386 [2024-11-26 20:07:07.117525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.386 [2024-11-26 20:07:07.117540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.386 [2024-11-26 20:07:07.126062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.386 [2024-11-26 20:07:07.126191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15034 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:06.386 [2024-11-26 20:07:07.126207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.386 [2024-11-26 20:07:07.134791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.386 [2024-11-26 20:07:07.135010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.386 [2024-11-26 20:07:07.135027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.386 [2024-11-26 20:07:07.143583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.386 [2024-11-26 20:07:07.143869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.386 [2024-11-26 20:07:07.143885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.386 [2024-11-26 20:07:07.152341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.386 [2024-11-26 20:07:07.152560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.386 [2024-11-26 20:07:07.152575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.386 [2024-11-26 20:07:07.161077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.386 [2024-11-26 20:07:07.161386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.386 [2024-11-26 20:07:07.161402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.386 [2024-11-26 20:07:07.169896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.386 [2024-11-26 20:07:07.170168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.386 [2024-11-26 20:07:07.170183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.386 [2024-11-26 20:07:07.178711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.386 [2024-11-26 20:07:07.178958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.386 [2024-11-26 20:07:07.178974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.386 [2024-11-26 20:07:07.187486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.386 [2024-11-26 20:07:07.187736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 
nsid:1 lba:18351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.386 [2024-11-26 20:07:07.187750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.386 [2024-11-26 20:07:07.196245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.386 [2024-11-26 20:07:07.196464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.386 [2024-11-26 20:07:07.196479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.649 [2024-11-26 20:07:07.204979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.649 [2024-11-26 20:07:07.205226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.649 [2024-11-26 20:07:07.205248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.649 [2024-11-26 20:07:07.213804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.649 [2024-11-26 20:07:07.213935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.649 [2024-11-26 20:07:07.213950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.649 [2024-11-26 20:07:07.222530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.649 [2024-11-26 20:07:07.222792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.649 [2024-11-26 20:07:07.222815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.649 [2024-11-26 20:07:07.231299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.649 [2024-11-26 20:07:07.231424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.649 [2024-11-26 20:07:07.231439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.649 [2024-11-26 20:07:07.240074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.649 [2024-11-26 20:07:07.240352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.649 [2024-11-26 20:07:07.240368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.649 [2024-11-26 20:07:07.248810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.649 [2024-11-26 20:07:07.249070] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.649 [2024-11-26 20:07:07.249086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.649 [2024-11-26 20:07:07.257551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.649 [2024-11-26 20:07:07.257802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.649 [2024-11-26 20:07:07.257817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.649 [2024-11-26 20:07:07.266285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.649 [2024-11-26 20:07:07.266527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.649 [2024-11-26 20:07:07.266542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.649 [2024-11-26 20:07:07.275021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.649 [2024-11-26 20:07:07.275276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.649 [2024-11-26 20:07:07.275291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.649 [2024-11-26 20:07:07.283752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.649 [2024-11-26 20:07:07.284014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.649 [2024-11-26 20:07:07.284032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.649 [2024-11-26 20:07:07.292537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.649 [2024-11-26 20:07:07.292762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.649 [2024-11-26 20:07:07.292777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.649 [2024-11-26 20:07:07.301256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.649 [2024-11-26 20:07:07.301487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.649 [2024-11-26 20:07:07.301502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.650 [2024-11-26 20:07:07.309979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.650 
[2024-11-26 20:07:07.310211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.650 [2024-11-26 20:07:07.310227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.650 [2024-11-26 20:07:07.318781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.650 [2024-11-26 20:07:07.318905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.650 [2024-11-26 20:07:07.318920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.650 [2024-11-26 20:07:07.327490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.650 [2024-11-26 20:07:07.327726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.650 [2024-11-26 20:07:07.327741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.650 [2024-11-26 20:07:07.336243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.650 [2024-11-26 20:07:07.336475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.650 [2024-11-26 20:07:07.336489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.650 [2024-11-26 20:07:07.344976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.650 [2024-11-26 20:07:07.345250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.650 [2024-11-26 20:07:07.345265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.650 [2024-11-26 20:07:07.353804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.650 [2024-11-26 20:07:07.354076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.650 [2024-11-26 20:07:07.354091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.650 [2024-11-26 20:07:07.362557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.650 [2024-11-26 20:07:07.362793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.650 [2024-11-26 20:07:07.362808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.650 [2024-11-26 20:07:07.371359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) 
with pdu=0x200016ef4298 00:29:06.650 [2024-11-26 20:07:07.371612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.650 [2024-11-26 20:07:07.371628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.650 [2024-11-26 20:07:07.380123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.650 [2024-11-26 20:07:07.380366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.650 [2024-11-26 20:07:07.380387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.650 [2024-11-26 20:07:07.388896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.650 [2024-11-26 20:07:07.389114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.650 [2024-11-26 20:07:07.389129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.650 [2024-11-26 20:07:07.397636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.650 [2024-11-26 20:07:07.397888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.650 [2024-11-26 20:07:07.397904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.650 [2024-11-26 20:07:07.406375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.650 [2024-11-26 20:07:07.406641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.650 [2024-11-26 20:07:07.406657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.650 [2024-11-26 20:07:07.415122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.650 [2024-11-26 20:07:07.415401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.650 [2024-11-26 20:07:07.415416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.650 [2024-11-26 20:07:07.423917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.650 [2024-11-26 20:07:07.424181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.650 [2024-11-26 20:07:07.424197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.650 [2024-11-26 20:07:07.432720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.650 [2024-11-26 20:07:07.432931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.650 [2024-11-26 20:07:07.432952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.650 [2024-11-26 20:07:07.441526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.650 [2024-11-26 20:07:07.441785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.650 [2024-11-26 20:07:07.441800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.650 [2024-11-26 20:07:07.450269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.650 [2024-11-26 20:07:07.450529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.650 [2024-11-26 20:07:07.450545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.650 [2024-11-26 20:07:07.459041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.650 [2024-11-26 20:07:07.459281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.650 [2024-11-26 20:07:07.459296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.912 [2024-11-26 20:07:07.467769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.912 [2024-11-26 20:07:07.467976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.912 [2024-11-26 20:07:07.467991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.912 [2024-11-26 20:07:07.476579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.912 [2024-11-26 20:07:07.476831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.912 [2024-11-26 20:07:07.476846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.912 [2024-11-26 20:07:07.485339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.912 [2024-11-26 20:07:07.485568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.912 [2024-11-26 20:07:07.485582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.912 [2024-11-26 20:07:07.494072] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.912 [2024-11-26 20:07:07.494334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.912 [2024-11-26 20:07:07.494350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.912 [2024-11-26 20:07:07.502839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.912 [2024-11-26 20:07:07.503094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.912 [2024-11-26 20:07:07.503110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.912 [2024-11-26 20:07:07.511636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.912 [2024-11-26 20:07:07.511853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.912 [2024-11-26 20:07:07.511868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.912 [2024-11-26 20:07:07.520407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.912 [2024-11-26 20:07:07.520677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.912 [2024-11-26 20:07:07.520692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.912 [2024-11-26 20:07:07.529168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.912 [2024-11-26 20:07:07.529406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.912 [2024-11-26 20:07:07.529421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.912 [2024-11-26 20:07:07.537924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.912 [2024-11-26 20:07:07.538182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.912 [2024-11-26 20:07:07.538197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.912 [2024-11-26 20:07:07.546677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.912 [2024-11-26 20:07:07.546961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.912 [2024-11-26 20:07:07.546977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:29:06.912 [2024-11-26 20:07:07.555413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.912 [2024-11-26 20:07:07.555661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.912 [2024-11-26 20:07:07.555676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.912 [2024-11-26 20:07:07.564177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.912 [2024-11-26 20:07:07.564425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.912 [2024-11-26 20:07:07.564440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.912 [2024-11-26 20:07:07.572932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.912 [2024-11-26 20:07:07.573050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.912 [2024-11-26 20:07:07.573065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.912 [2024-11-26 20:07:07.581803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.912 [2024-11-26 20:07:07.582068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.912 [2024-11-26 20:07:07.582091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.912 [2024-11-26 20:07:07.590561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.912 [2024-11-26 20:07:07.590823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.912 [2024-11-26 20:07:07.590838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.912 [2024-11-26 20:07:07.599310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.912 [2024-11-26 20:07:07.599539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.912 [2024-11-26 20:07:07.599554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.912 [2024-11-26 20:07:07.608099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.912 [2024-11-26 20:07:07.608367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.912 [2024-11-26 20:07:07.608383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.912 [2024-11-26 20:07:07.616873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.912 [2024-11-26 20:07:07.617099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.912 [2024-11-26 20:07:07.617113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.912 [2024-11-26 20:07:07.625648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.912 [2024-11-26 20:07:07.625907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.912 [2024-11-26 20:07:07.625923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.912 [2024-11-26 20:07:07.634394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.913 [2024-11-26 20:07:07.634642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.913 [2024-11-26 20:07:07.634664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.913 [2024-11-26 20:07:07.643164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.913 [2024-11-26 20:07:07.643394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.913 [2024-11-26 20:07:07.643409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.913 [2024-11-26 20:07:07.651931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.913 [2024-11-26 20:07:07.652171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.913 [2024-11-26 20:07:07.652186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.913 [2024-11-26 20:07:07.660661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.913 [2024-11-26 20:07:07.660932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.913 [2024-11-26 20:07:07.660950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.913 [2024-11-26 20:07:07.669437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.913 [2024-11-26 20:07:07.669659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.913 [2024-11-26 20:07:07.669674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.913 [2024-11-26 20:07:07.678251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.913 [2024-11-26 20:07:07.678471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.913 [2024-11-26 20:07:07.678486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.913 [2024-11-26 20:07:07.687002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.913 [2024-11-26 20:07:07.687237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.913 [2024-11-26 20:07:07.687252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.913 [2024-11-26 20:07:07.695794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.913 [2024-11-26 20:07:07.696035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.913 [2024-11-26 20:07:07.696050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.913 [2024-11-26 20:07:07.704574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.913 [2024-11-26 20:07:07.704844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.913 [2024-11-26 20:07:07.704860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.913 [2024-11-26 20:07:07.713386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.913 [2024-11-26 20:07:07.713610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.913 [2024-11-26 20:07:07.713626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.913 [2024-11-26 20:07:07.722116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:06.913 [2024-11-26 20:07:07.722372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.913 [2024-11-26 20:07:07.722387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:07.174 [2024-11-26 20:07:07.730891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:07.174 [2024-11-26 20:07:07.731128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.174 [2024-11-26 20:07:07.731142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:07.174 [2024-11-26 20:07:07.739638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:07.174 [2024-11-26 20:07:07.739903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.174 [2024-11-26 20:07:07.739919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:07.174 [2024-11-26 20:07:07.748427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:07.174 [2024-11-26 20:07:07.748695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.174 [2024-11-26 20:07:07.748716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:07.174 [2024-11-26 20:07:07.757202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:07.174 [2024-11-26 20:07:07.757444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.174 [2024-11-26 20:07:07.757460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:07.174 [2024-11-26 20:07:07.765932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:07.174 [2024-11-26 20:07:07.766162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.174 [2024-11-26 20:07:07.766177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:07.174 [2024-11-26 20:07:07.774744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:07.174 [2024-11-26 20:07:07.774984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.174 [2024-11-26 20:07:07.775008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:07.174 [2024-11-26 20:07:07.783529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:07.174 [2024-11-26 20:07:07.783794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.174 [2024-11-26 20:07:07.783809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:07.174 [2024-11-26 20:07:07.792286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298 00:29:07.174 [2024-11-26 20:07:07.792509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.174 
[2024-11-26 20:07:07.792524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:29:07.174 [2024-11-26 20:07:07.801075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298
00:29:07.174 [2024-11-26 20:07:07.801211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.174 [2024-11-26 20:07:07.801227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:29:07.174 29020.50 IOPS, 113.36 MiB/s [2024-11-26T19:07:07.995Z]
[2024-11-26 20:07:07.809825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d3d0) with pdu=0x200016ef4298
00:29:07.174 [2024-11-26 20:07:07.810103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.174 [2024-11-26 20:07:07.810118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:29:07.174
00:29:07.174 Latency(us)
00:29:07.174 [2024-11-26T19:07:07.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:07.174 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:07.174 nvme0n1 : 2.01 29022.53 113.37 0.00 0.00 4402.69 2170.88 16056.32
00:29:07.174 [2024-11-26T19:07:07.995Z] ===================================================================================================================
00:29:07.174 [2024-11-26T19:07:07.995Z] Total : 29022.53 113.37 0.00 0.00 4402.69 2170.88 16056.32
00:29:07.174 {
00:29:07.174   "results": [
00:29:07.174     {
00:29:07.174       "job": "nvme0n1",
00:29:07.174       "core_mask": "0x2",
00:29:07.174       "workload": "randwrite",
00:29:07.174       "status": "finished",
00:29:07.174       "queue_depth": 128,
00:29:07.174       "io_size": 4096,
00:29:07.174       "runtime": 2.005649,
00:29:07.174       "iops": 29022.525875664185,
00:29:07.174       "mibps": 113.36924170181322,
00:29:07.174       "io_failed": 0,
00:29:07.174       "io_timeout": 0,
00:29:07.174       "avg_latency_us": 4402.690190176777,
00:29:07.174       "min_latency_us": 2170.88,
00:29:07.174       "max_latency_us": 16056.32
00:29:07.174     }
00:29:07.174   ],
00:29:07.174   "core_count": 1
00:29:07.174 }
00:29:07.174 20:07:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:07.174 20:07:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:07.174 20:07:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:07.174 | .driver_specific
00:29:07.174 | .nvme_error
00:29:07.174 | .status_code
00:29:07.174 | .command_transient_transport_error'
00:29:07.174 20:07:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:07.435 20:07:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 228 > 0 ))
00:29:07.435 20:07:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3834571
00:29:07.435 20:07:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3834571 ']'
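The (( 228 > 0 )) comparison above is the pass gate for the run that just finished: bdev_get_iostat reports per-status-code NVMe error counters (populated because --nvme-error-stat was set on the controller), and the jq filter extracts how many completions came back as TRANSIENT TRANSPORT ERROR (00/22), i.e. retried digest failures. A minimal standalone sketch of the same check, assuming an SPDK app is already serving RPCs on /var/tmp/bperf.sock (the shell helper name is hypothetical; paths and RPC names are the ones from this run):

```bash
#!/usr/bin/env bash
# Sketch of the get_transient_errcount check traced above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

transient_errcount() {
    local bdev=$1
    # The nvme_error block is only populated because bdev_nvme_set_options
    # --nvme-error-stat was applied before the controller was attached.
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'
}

# This run counted 228 TRANSIENT TRANSPORT ERROR completions; any
# non-zero count means digest errors were detected and retried, so pass.
(( $(transient_errcount nvme0n1) > 0 )) && echo "transient transport errors observed"
```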
00:29:07.436 20:07:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3834571
00:29:07.436 20:07:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:07.436 20:07:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:07.436 20:07:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3834571
00:29:07.436 20:07:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:07.436 20:07:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:07.436 20:07:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3834571'
00:29:07.436 killing process with pid 3834571
00:29:07.436 20:07:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3834571
00:29:07.436 Received shutdown signal, test time was about 2.000000 seconds
00:29:07.436
00:29:07.436 Latency(us)
00:29:07.436 [2024-11-26T19:07:08.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:07.436 [2024-11-26T19:07:08.257Z] ===================================================================================================================
00:29:07.436 [2024-11-26T19:07:08.257Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:07.436 20:07:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3834571
00:29:07.436 20:07:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:29:07.436 20:07:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:07.436 20:07:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:07.436 20:07:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:07.436 20:07:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:07.436 20:07:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3835258
00:29:07.436 20:07:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3835258 /var/tmp/bperf.sock
00:29:07.436 20:07:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3835258 ']'
00:29:07.436 20:07:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:29:07.436 20:07:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:07.436 20:07:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:07.436 20:07:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:07.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
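The helper run_bperf_err then repeats the experiment at a new I/O geometry: randwrite, 131072-byte blocks, queue depth 16. bdevperf is restarted in wait mode (-z), so no job runs until perform_tests arrives over the /var/tmp/bperf.sock RPC socket. A hedged sketch of that launch-and-wait step follows; the readiness poll is a simplification of waitforlisten, with rpc_get_methods used only as a cheap RPC to probe the socket:

```bash
# Relaunch bdevperf idle (-z) for the 128 KiB, queue-depth-16 error pass;
# flags mirror the host/digest.sh@57 command traced above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Simplified stand-in for waitforlisten: poll until the RPC socket answers
# (the real helper also bounds the wait with max_retries=100).
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
echo "bdevperf (pid $bperfpid) listening on /var/tmp/bperf.sock"
```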
00:29:07.436 20:07:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:07.436 20:07:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:07.436 [2024-11-26 20:07:08.228452] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:29:07.436 [2024-11-26 20:07:08.228506] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3835258 ] 00:29:07.436 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:07.436 Zero copy mechanism will not be used. 00:29:07.696 [2024-11-26 20:07:08.311899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.696 [2024-11-26 20:07:08.340095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.267 20:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:08.267 20:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:08.267 20:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:08.267 20:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:08.528 20:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:08.528 20:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.528 20:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:08.528 20:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.528 20:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:08.528 20:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:09.100 nvme0n1 00:29:09.100 20:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:09.100 20:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.100 20:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:09.100 20:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.100 20:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:09.100 20:07:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
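The RPC sequence traced around this point is the core of the digest-error setup: NVMe error statistics and unlimited bdev-layer retries are switched on, the controller is attached with TCP data digest enabled (--ddgst), and the accel layer is told to corrupt crc32c results, so a fraction of the writes carry a bad data digest and complete as TRANSIENT TRANSPORT ERROR (00/22). A condensed sketch of the same sequence, assuming the idle bdevperf from the previous step is serving /var/tmp/bperf.sock (the commands match the trace; the -i 32 interval semantics are assumed from the flag, not confirmed here):

```bash
# Digest-error setup for the second pass, as an RPC sequence against the
# idle bdevperf instance; commands match the host/digest.sh trace.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

# Count NVMe errors per status code, and retry failed I/O in the bdev
# layer forever (-1), so digest failures surface as counters, not job errors.
rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Start with crc32c injection off, attach the controller with TCP data
# digest enabled, then corrupt crc32c results so write digests go bad
# (every 32nd operation, assuming -i is an injection interval).
rpc accel_error_inject_error -o crc32c -t disable
rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the timed randwrite run over the same socket.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
```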
00:29:09.100 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:09.100 Zero copy mechanism will not be used. 00:29:09.100 Running I/O for 2 seconds... 00:29:09.100 [2024-11-26 20:07:09.777509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:09.100 [2024-11-26 20:07:09.777784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.100 [2024-11-26 20:07:09.777811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.100 [2024-11-26 20:07:09.787015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:09.100 [2024-11-26 20:07:09.787277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.100 [2024-11-26 20:07:09.787297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.100 [2024-11-26 20:07:09.795723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:09.100 [2024-11-26 20:07:09.795993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.100 [2024-11-26 20:07:09.796011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.100 [2024-11-26 20:07:09.807320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:09.100 [2024-11-26 20:07:09.807585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.100 [2024-11-26 20:07:09.807602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.100 [2024-11-26 20:07:09.818920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:09.100 [2024-11-26 20:07:09.819201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.100 [2024-11-26 20:07:09.819218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.100 [2024-11-26 20:07:09.830319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:09.100 [2024-11-26 20:07:09.830550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.100 [2024-11-26 20:07:09.830566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.100 [2024-11-26 20:07:09.842062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:09.100 [2024-11-26 20:07:09.842363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:09.100 [2024-11-26 20:07:09.842380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.100 [2024-11-26 20:07:09.853499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:09.100 [2024-11-26 20:07:09.853753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.100 [2024-11-26 20:07:09.853773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.100 [2024-11-26 20:07:09.865072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:09.100 [2024-11-26 20:07:09.865336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.100 [2024-11-26 20:07:09.865352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.100 [2024-11-26 20:07:09.876983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:09.100 [2024-11-26 20:07:09.877201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.100 [2024-11-26 20:07:09.877216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.100 [2024-11-26 20:07:09.889016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:09.100 [2024-11-26 20:07:09.889302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.100 [2024-11-26 20:07:09.889318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.100 [2024-11-26 20:07:09.900536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:09.100 [2024-11-26 20:07:09.900765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.100 [2024-11-26 20:07:09.900780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.101 [2024-11-26 20:07:09.911916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:09.101 [2024-11-26 20:07:09.912148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.101 [2024-11-26 20:07:09.912169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.363 [2024-11-26 20:07:09.923513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:09.363 [2024-11-26 20:07:09.923761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.363 [2024-11-26 20:07:09.923777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.363 [2024-11-26 20:07:09.934747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:09.363 [2024-11-26 20:07:09.935035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.363 [2024-11-26 20:07:09.935052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.363 [2024-11-26 20:07:09.946460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:09.363 [2024-11-26 20:07:09.946723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.363 [2024-11-26 20:07:09.946739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.363 [2024-11-26 20:07:09.958178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:09.363 [2024-11-26 20:07:09.958417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.363 [2024-11-26 20:07:09.958433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.363 [2024-11-26 20:07:09.969360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:09.363 [2024-11-26 20:07:09.969676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.363 [2024-11-26 20:07:09.969693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.363 [2024-11-26 20:07:09.980841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:09.363 [2024-11-26 20:07:09.981111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.363 [2024-11-26 20:07:09.981127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.363 [2024-11-26 20:07:09.991902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:09.363 [2024-11-26 20:07:09.992136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.363 [2024-11-26 20:07:09.992151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.363 [2024-11-26 20:07:10.003514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:09.363 [2024-11-26 20:07:10.003816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.363 [2024-11-26 20:07:10.003833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.363 [2024-11-26 20:07:10.015228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:09.363 [2024-11-26 20:07:10.015533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.363 [2024-11-26 20:07:10.015550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.363 [2024-11-26 20:07:10.027172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:09.363 [2024-11-26 20:07:10.027296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.363 [2024-11-26 20:07:10.027313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.363 [2024-11-26 20:07:10.038905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:09.363 [2024-11-26 20:07:10.039129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.363 [2024-11-26 20:07:10.039145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.363 [2024-11-26 20:07:10.050691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:09.363 [2024-11-26 20:07:10.050951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.363 [2024-11-26 20:07:10.050966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.363 [2024-11-26 20:07:10.061528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:09.363 [2024-11-26 20:07:10.061836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.363 [2024-11-26 20:07:10.061852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.363 [2024-11-26 20:07:10.073352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:09.363 [2024-11-26 20:07:10.073589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.363 [2024-11-26 20:07:10.073605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.363 [2024-11-26 20:07:10.083499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:09.363 [2024-11-26 20:07:10.083770] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.363 [2024-11-26 20:07:10.083785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:09.363 [2024-11-26 20:07:10.094147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.363 [2024-11-26 20:07:10.094461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.363 [2024-11-26 20:07:10.094478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:09.363 [2024-11-26 20:07:10.105784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.363 [2024-11-26 20:07:10.106021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.363 [2024-11-26 20:07:10.106037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:09.363 [2024-11-26 20:07:10.116145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.364 [2024-11-26 20:07:10.116212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.364 [2024-11-26 20:07:10.116228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:09.364 [2024-11-26 20:07:10.123033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.364 [2024-11-26 20:07:10.123303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.364 [2024-11-26 20:07:10.123318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:09.364 [2024-11-26 20:07:10.133755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.364 [2024-11-26 20:07:10.133814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.364 [2024-11-26 20:07:10.133830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:09.364 [2024-11-26 20:07:10.140458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.364 [2024-11-26 20:07:10.140751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.364 [2024-11-26 20:07:10.140771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:09.364 [2024-11-26 20:07:10.149517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.364 [2024-11-26 20:07:10.149746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.364 [2024-11-26 20:07:10.149762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:09.364 [2024-11-26 20:07:10.159716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.364 [2024-11-26 20:07:10.159998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.364 [2024-11-26 20:07:10.160014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:09.364 [2024-11-26 20:07:10.169931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.364 [2024-11-26 20:07:10.170192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.364 [2024-11-26 20:07:10.170207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:09.626 [2024-11-26 20:07:10.180662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.626 [2024-11-26 20:07:10.180958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.626 [2024-11-26 20:07:10.180974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:09.626 [2024-11-26 20:07:10.189269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.626 [2024-11-26 20:07:10.189605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.626 [2024-11-26 20:07:10.189622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:09.626 [2024-11-26 20:07:10.200404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.626 [2024-11-26 20:07:10.200467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.626 [2024-11-26 20:07:10.200482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:09.626 [2024-11-26 20:07:10.207612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.626 [2024-11-26 20:07:10.207659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.626 [2024-11-26 20:07:10.207674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:09.626 [2024-11-26 20:07:10.216275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.626 [2024-11-26 20:07:10.216557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.626 [2024-11-26 20:07:10.216572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:09.627 [2024-11-26 20:07:10.226156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.627 [2024-11-26 20:07:10.226442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.627 [2024-11-26 20:07:10.226457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:09.627 [2024-11-26 20:07:10.234075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.627 [2024-11-26 20:07:10.234157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.627 [2024-11-26 20:07:10.234177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:09.627 [2024-11-26 20:07:10.242814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.627 [2024-11-26 20:07:10.243120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.627 [2024-11-26 20:07:10.243136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:09.627 [2024-11-26 20:07:10.251529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.627 [2024-11-26 20:07:10.251773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.627 [2024-11-26 20:07:10.251788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:09.627 [2024-11-26 20:07:10.262183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.627 [2024-11-26 20:07:10.262450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.627 [2024-11-26 20:07:10.262465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:09.627 [2024-11-26 20:07:10.273100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.627 [2024-11-26 20:07:10.273381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.627 [2024-11-26 20:07:10.273397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:09.627 [2024-11-26 20:07:10.284990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.627 [2024-11-26 20:07:10.285267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.627 [2024-11-26 20:07:10.285282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:09.627 [2024-11-26 20:07:10.296116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.627 [2024-11-26 20:07:10.296395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.627 [2024-11-26 20:07:10.296410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:09.627 [2024-11-26 20:07:10.308308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.627 [2024-11-26 20:07:10.308550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.627 [2024-11-26 20:07:10.308565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:09.627 [2024-11-26 20:07:10.319857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.627 [2024-11-26 20:07:10.320109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.627 [2024-11-26 20:07:10.320125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:09.627 [2024-11-26 20:07:10.330469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.627 [2024-11-26 20:07:10.330735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.627 [2024-11-26 20:07:10.330750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:09.627 [2024-11-26 20:07:10.342316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.627 [2024-11-26 20:07:10.342516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.627 [2024-11-26 20:07:10.342531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:09.627 [2024-11-26 20:07:10.353654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.627 [2024-11-26 20:07:10.353979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.627 [2024-11-26 20:07:10.353994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:09.627 [2024-11-26 20:07:10.364856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.627 [2024-11-26 20:07:10.364907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.627 [2024-11-26 20:07:10.364922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:09.627 [2024-11-26 20:07:10.369305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.627 [2024-11-26 20:07:10.369528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.627 [2024-11-26 20:07:10.369542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:09.627 [2024-11-26 20:07:10.375574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.627 [2024-11-26 20:07:10.375631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.627 [2024-11-26 20:07:10.375646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:09.627 [2024-11-26 20:07:10.385337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.627 [2024-11-26 20:07:10.385614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.627 [2024-11-26 20:07:10.385636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:09.627 [2024-11-26 20:07:10.394301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.627 [2024-11-26 20:07:10.394602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.627 [2024-11-26 20:07:10.394623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:09.627 [2024-11-26 20:07:10.399018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.627 [2024-11-26 20:07:10.399277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.627 [2024-11-26 20:07:10.399292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:09.627 [2024-11-26 20:07:10.408271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.627 [2024-11-26 20:07:10.408526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.627 [2024-11-26 20:07:10.408543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:09.627 [2024-11-26 20:07:10.418476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.627 [2024-11-26 20:07:10.418539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.627 [2024-11-26 20:07:10.418554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:09.627 [2024-11-26 20:07:10.427968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.627 [2024-11-26 20:07:10.428186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.627 [2024-11-26 20:07:10.428202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:09.627 [2024-11-26 20:07:10.438151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.627 [2024-11-26 20:07:10.438452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.627 [2024-11-26 20:07:10.438468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:09.890 [2024-11-26 20:07:10.449324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.890 [2024-11-26 20:07:10.449570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.890 [2024-11-26 20:07:10.449585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:09.890 [2024-11-26 20:07:10.460703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.890 [2024-11-26 20:07:10.460967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.890 [2024-11-26 20:07:10.460983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:09.890 [2024-11-26 20:07:10.472278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.890 [2024-11-26 20:07:10.472559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.890 [2024-11-26 20:07:10.472575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:09.890 [2024-11-26 20:07:10.484007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.890 [2024-11-26 20:07:10.484270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.890 [2024-11-26 20:07:10.484286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:09.890 [2024-11-26 20:07:10.495868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.890 [2024-11-26 20:07:10.496123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.890 [2024-11-26 20:07:10.496138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:09.890 [2024-11-26 20:07:10.507550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.890 [2024-11-26 20:07:10.507769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.890 [2024-11-26 20:07:10.507784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:09.890 [2024-11-26 20:07:10.518530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.890 [2024-11-26 20:07:10.518826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.890 [2024-11-26 20:07:10.518842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:09.890 [2024-11-26 20:07:10.529858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.890 [2024-11-26 20:07:10.530136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.890 [2024-11-26 20:07:10.530152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:09.890 [2024-11-26 20:07:10.541659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.890 [2024-11-26 20:07:10.541995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.890 [2024-11-26 20:07:10.542011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:09.890 [2024-11-26 20:07:10.552712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.890 [2024-11-26 20:07:10.552753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.890 [2024-11-26 20:07:10.552768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:09.890 [2024-11-26 20:07:10.562578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.890 [2024-11-26 20:07:10.562901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.890 [2024-11-26 20:07:10.562917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:09.890 [2024-11-26 20:07:10.570766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.890 [2024-11-26 20:07:10.570815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.890 [2024-11-26 20:07:10.570831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:09.890 [2024-11-26 20:07:10.579703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.890 [2024-11-26 20:07:10.580005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.890 [2024-11-26 20:07:10.580021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:09.890 [2024-11-26 20:07:10.589165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.890 [2024-11-26 20:07:10.589491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.890 [2024-11-26 20:07:10.589507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:09.890 [2024-11-26 20:07:10.594970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.890 [2024-11-26 20:07:10.595211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.890 [2024-11-26 20:07:10.595226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:09.890 [2024-11-26 20:07:10.603092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.890 [2024-11-26 20:07:10.603140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.890 [2024-11-26 20:07:10.603155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:09.890 [2024-11-26 20:07:10.613001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.890 [2024-11-26 20:07:10.613065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.890 [2024-11-26 20:07:10.613080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:09.890 [2024-11-26 20:07:10.622167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.890 [2024-11-26 20:07:10.622230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.890 [2024-11-26 20:07:10.622245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:09.890 [2024-11-26 20:07:10.633311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.890 [2024-11-26 20:07:10.633620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.890 [2024-11-26 20:07:10.633636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:09.890 [2024-11-26 20:07:10.645293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.890 [2024-11-26 20:07:10.645560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.890 [2024-11-26 20:07:10.645575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:09.890 [2024-11-26 20:07:10.657142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.890 [2024-11-26 20:07:10.657430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.890 [2024-11-26 20:07:10.657451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:09.890 [2024-11-26 20:07:10.667585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.890 [2024-11-26 20:07:10.667877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.890 [2024-11-26 20:07:10.667894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:09.890 [2024-11-26 20:07:10.673958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.890 [2024-11-26 20:07:10.674277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.890 [2024-11-26 20:07:10.674293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:09.890 [2024-11-26 20:07:10.680653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.890 [2024-11-26 20:07:10.680713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.890 [2024-11-26 20:07:10.680729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:09.890 [2024-11-26 20:07:10.687567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.890 [2024-11-26 20:07:10.687624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.890 [2024-11-26 20:07:10.687640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:09.890 [2024-11-26 20:07:10.696038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.890 [2024-11-26 20:07:10.696338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.890 [2024-11-26 20:07:10.696353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:09.890 [2024-11-26 20:07:10.703261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:09.890 [2024-11-26 20:07:10.703341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.890 [2024-11-26 20:07:10.703356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.152 [2024-11-26 20:07:10.709977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.152 [2024-11-26 20:07:10.710268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.152 [2024-11-26 20:07:10.710285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.152 [2024-11-26 20:07:10.720064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.152 [2024-11-26 20:07:10.720344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.152 [2024-11-26 20:07:10.720359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.152 [2024-11-26 20:07:10.728288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.152 [2024-11-26 20:07:10.728350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.152 [2024-11-26 20:07:10.728365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.152 [2024-11-26 20:07:10.735514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.152 [2024-11-26 20:07:10.735788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.152 [2024-11-26 20:07:10.735805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.152 [2024-11-26 20:07:10.743329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.153 [2024-11-26 20:07:10.743636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.153 [2024-11-26 20:07:10.743652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.153 [2024-11-26 20:07:10.753284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.153 [2024-11-26 20:07:10.753588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.153 [2024-11-26 20:07:10.753604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.153 [2024-11-26 20:07:10.759896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.153 [2024-11-26 20:07:10.759964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.153 [2024-11-26 20:07:10.759979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.153 [2024-11-26 20:07:10.767971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.153 [2024-11-26 20:07:10.768207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.153 [2024-11-26 20:07:10.768223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.153 [2024-11-26 20:07:10.775165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.153 [2024-11-26 20:07:10.776281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.153 [2024-11-26 20:07:10.776297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.153 3100.00 IOPS, 387.50 MiB/s [2024-11-26T19:07:10.974Z] [2024-11-26 20:07:10.782083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.153 [2024-11-26 20:07:10.782141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.153 [2024-11-26 20:07:10.782156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.153 [2024-11-26 20:07:10.785234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.153 [2024-11-26 20:07:10.785383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.153 [2024-11-26 20:07:10.785398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.153 [2024-11-26 20:07:10.788062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.153 [2024-11-26 20:07:10.788205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.153 [2024-11-26 20:07:10.788220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.153 [2024-11-26 20:07:10.796682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.153 [2024-11-26 20:07:10.796748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.153 [2024-11-26 20:07:10.796764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.153 [2024-11-26 20:07:10.801970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.153 [2024-11-26 20:07:10.802262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.153 [2024-11-26 20:07:10.802279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.153 [2024-11-26 20:07:10.810168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.153 [2024-11-26 20:07:10.810444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.153 [2024-11-26 20:07:10.810460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.153 [2024-11-26 20:07:10.818641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.153 [2024-11-26 20:07:10.818950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.153 [2024-11-26 20:07:10.818967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.153 [2024-11-26 20:07:10.823119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.153 [2024-11-26 20:07:10.823277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.153 [2024-11-26 20:07:10.823293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.153 [2024-11-26 20:07:10.826316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.153 [2024-11-26 20:07:10.826467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.153 [2024-11-26 20:07:10.826483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.153 [2024-11-26 20:07:10.833759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.153 [2024-11-26 20:07:10.833911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.153 [2024-11-26 20:07:10.833927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.153 [2024-11-26 20:07:10.838379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.153 [2024-11-26 20:07:10.838525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.153 [2024-11-26 20:07:10.838544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.153 [2024-11-26 20:07:10.841954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.153 [2024-11-26 20:07:10.842137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.153 [2024-11-26 20:07:10.842153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.153 [2024-11-26 20:07:10.847990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.153 [2024-11-26 20:07:10.848310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.153 [2024-11-26 20:07:10.848327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.153 [2024-11-26 20:07:10.857669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.153 [2024-11-26 20:07:10.857918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.153 [2024-11-26 20:07:10.857934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.153 [2024-11-26 20:07:10.867667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.153 [2024-11-26 20:07:10.867961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.153 [2024-11-26 20:07:10.867976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.153 [2024-11-26 20:07:10.878868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.153 [2024-11-26 20:07:10.879183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.153 [2024-11-26 20:07:10.879199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.153 [2024-11-26 20:07:10.889121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.153 [2024-11-26 20:07:10.889346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.153 [2024-11-26 20:07:10.889362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.153 [2024-11-26 20:07:10.899688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.153 [2024-11-26 20:07:10.899973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.153 [2024-11-26 20:07:10.899989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.153 [2024-11-26 20:07:10.910732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.153 [2024-11-26 20:07:10.910807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.153 [2024-11-26 20:07:10.910822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.153 [2024-11-26 20:07:10.921228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.153 [2024-11-26 20:07:10.921481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.153 [2024-11-26 20:07:10.921496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.153 [2024-11-26 20:07:10.931739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.153 [2024-11-26 20:07:10.932009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.153 [2024-11-26 20:07:10.932025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.153 [2024-11-26 20:07:10.942523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.153 [2024-11-26 20:07:10.942600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.153 [2024-11-26 20:07:10.942615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.153 [2024-11-26 20:07:10.953593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.153 [2024-11-26 20:07:10.953839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.154 [2024-11-26 20:07:10.953854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.154 [2024-11-26 20:07:10.963187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.154 [2024-11-26 20:07:10.963264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.154 [2024-11-26 20:07:10.963279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.415 [2024-11-26 20:07:10.973143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.415 [2024-11-26 20:07:10.973443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.415 [2024-11-26 20:07:10.973459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.415 [2024-11-26 20:07:10.983077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.415 [2024-11-26 20:07:10.983128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.415 [2024-11-26 20:07:10.983144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.415 [2024-11-26 20:07:10.988391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.415 [2024-11-26 20:07:10.988442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.415 [2024-11-26 20:07:10.988457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.415 [2024-11-26 20:07:10.992704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.415 [2024-11-26 20:07:10.992758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.415 [2024-11-26 20:07:10.992773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.415 [2024-11-26 20:07:10.999382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.415 [2024-11-26 20:07:10.999443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.415 [2024-11-26 20:07:10.999459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.415 [2024-11-26 20:07:11.004020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.415 [2024-11-26 20:07:11.004096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.415 [2024-11-26 20:07:11.004110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.415 [2024-11-26 20:07:11.009592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.415 [2024-11-26 20:07:11.009878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.415 [2024-11-26 20:07:11.009892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.415 [2024-11-26 20:07:11.018381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.415 [2024-11-26 20:07:11.018668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.415 [2024-11-26 20:07:11.018683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.415 [2024-11-26 20:07:11.027263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.415 [2024-11-26 20:07:11.027306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.415 [2024-11-26 20:07:11.027321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.415 [2024-11-26 20:07:11.032331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.415 [2024-11-26 20:07:11.032391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.415 [2024-11-26 20:07:11.032406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.416 [2024-11-26 20:07:11.040155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.416 [2024-11-26 20:07:11.040444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.416 [2024-11-26 20:07:11.040460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.416 [2024-11-26 20:07:11.050091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.416 [2024-11-26 20:07:11.050399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.416 [2024-11-26 20:07:11.050415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.416 [2024-11-26 20:07:11.056568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.416 [2024-11-26 20:07:11.056612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.416 [2024-11-26 20:07:11.056630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.416 [2024-11-26 20:07:11.061029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.416 [2024-11-26 20:07:11.061073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.416 [2024-11-26 20:07:11.061088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.416 [2024-11-26 20:07:11.064622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.416 [2024-11-26 20:07:11.064667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.416 [2024-11-26 20:07:11.064682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.416 [2024-11-26 20:07:11.068726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.416 [2024-11-26 20:07:11.068777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.416 [2024-11-26 20:07:11.068792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.416 [2024-11-26 20:07:11.072674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.416 [2024-11-26 20:07:11.072718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.416 [2024-11-26 20:07:11.072733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.416 [2024-11-26 20:07:11.076218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.416 [2024-11-26 20:07:11.076263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.416 [2024-11-26 20:07:11.076278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.416 [2024-11-26 20:07:11.079780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.416 [2024-11-26 20:07:11.079835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.416 [2024-11-26 20:07:11.079850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.416 [2024-11-26 20:07:11.083531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.416 [2024-11-26 20:07:11.083584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.416 [2024-11-26 20:07:11.083599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.416 [2024-11-26 20:07:11.088936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.416 [2024-11-26 20:07:11.088981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.416 [2024-11-26 20:07:11.088997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.416 [2024-11-26 20:07:11.095806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.416 [2024-11-26 20:07:11.095864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.416 [2024-11-26 20:07:11.095880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.416 [2024-11-26 20:07:11.101562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.416 [2024-11-26 20:07:11.101620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.416 [2024-11-26 20:07:11.101635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.416 [2024-11-26 20:07:11.107570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.416 [2024-11-26 20:07:11.107680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.416 [2024-11-26 20:07:11.107695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.416 [2024-11-26 20:07:11.111662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.416 [2024-11-26 20:07:11.111715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.416 [2024-11-26 20:07:11.111730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.416 [2024-11-26 20:07:11.115864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.416 [2024-11-26 20:07:11.115946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.416 [2024-11-26 20:07:11.115961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.416 [2024-11-26 20:07:11.121601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.416 [2024-11-26 20:07:11.121667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.416 [2024-11-26 20:07:11.121682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.416 [2024-11-26 20:07:11.128911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.416 [2024-11-26 20:07:11.129189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.416 [2024-11-26 20:07:11.129204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.416 [2024-11-26 20:07:11.134145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.416 [2024-11-26 20:07:11.134213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.416 [2024-11-26 20:07:11.134228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.416 [2024-11-26 20:07:11.139973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.416 [2024-11-26 20:07:11.140276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.416 [2024-11-26 20:07:11.140292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.416 [2024-11-26 20:07:11.149111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.416 [2024-11-26 20:07:11.149393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.416 [2024-11-26 20:07:11.149409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.416 [2024-11-26 20:07:11.156896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.416 [2024-11-26 20:07:11.157285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.416 [2024-11-26 20:07:11.157300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.416 [2024-11-26 20:07:11.166825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.416 [2024-11-26 20:07:11.166889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.416 [2024-11-26 20:07:11.166904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.416 [2024-11-26 20:07:11.177653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.416 [2024-11-26 20:07:11.177871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.416 [2024-11-26 20:07:11.177885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.416 [2024-11-26 20:07:11.189119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.416 [2024-11-26 20:07:11.189373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.416 [2024-11-26 20:07:11.189388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.416 [2024-11-26 20:07:11.201060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.416 [2024-11-26 20:07:11.201416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.416 [2024-11-26 20:07:11.201433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.416 [2024-11-26 20:07:11.211342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.417 [2024-11-26 20:07:11.211414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.417 [2024-11-26 20:07:11.211429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.417 [2024-11-26 20:07:11.220094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.417 [2024-11-26 20:07:11.220234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.417 [2024-11-26 20:07:11.220249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.417 [2024-11-26 20:07:11.226339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.417 [2024-11-26 20:07:11.226626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.417 [2024-11-26 20:07:11.226645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.680 [2024-11-26 20:07:11.236632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.680 [2024-11-26 20:07:11.237015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.680 [2024-11-26 20:07:11.237031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.680 [2024-11-26 20:07:11.244189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.680 [2024-11-26 20:07:11.244234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.680 [2024-11-26 20:07:11.244250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.680 [2024-11-26 20:07:11.248676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.680 [2024-11-26 20:07:11.248720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.680 [2024-11-26 20:07:11.248735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.680 [2024-11-26 20:07:11.252697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.680 [2024-11-26 20:07:11.252751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.680 [2024-11-26 20:07:11.252766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.680 [2024-11-26 20:07:11.256639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.680 [2024-11-26 20:07:11.256695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.680 [2024-11-26 20:07:11.256710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.680 [2024-11-26 20:07:11.260707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.680 [2024-11-26 20:07:11.260773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.680 [2024-11-26 20:07:11.260788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.680 [2024-11-26 20:07:11.264264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.680 [2024-11-26 20:07:11.264332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.680 [2024-11-26 20:07:11.264347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.680 [2024-11-26 20:07:11.268577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.680 [2024-11-26 20:07:11.268630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.680 [2024-11-26 20:07:11.268645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.680 [2024-11-26 20:07:11.272620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8
00:29:10.680 [2024-11-26 20:07:11.272686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.680 [2024-11-26 20:07:11.272701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.680 [2024-11-26 20:07:11.276509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.680 [2024-11-26 20:07:11.276566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.680 [2024-11-26 20:07:11.276581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.680 [2024-11-26 20:07:11.280934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.680 [2024-11-26 20:07:11.281128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.680 [2024-11-26 20:07:11.281143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.680 [2024-11-26 20:07:11.288491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.680 [2024-11-26 20:07:11.288559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.680 [2024-11-26 20:07:11.288575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.680 [2024-11-26 20:07:11.292544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.680 [2024-11-26 20:07:11.292589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.680 [2024-11-26 20:07:11.292604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.680 [2024-11-26 20:07:11.298767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.680 [2024-11-26 20:07:11.299052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.680 [2024-11-26 20:07:11.299068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.680 [2024-11-26 20:07:11.303295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.680 [2024-11-26 20:07:11.303359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.680 [2024-11-26 20:07:11.303375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.680 [2024-11-26 20:07:11.307900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.680 [2024-11-26 20:07:11.307972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.680 [2024-11-26 20:07:11.307987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.680 [2024-11-26 20:07:11.311327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.680 [2024-11-26 20:07:11.311388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.681 [2024-11-26 20:07:11.311403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.681 [2024-11-26 20:07:11.315692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.681 [2024-11-26 20:07:11.315768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.681 [2024-11-26 20:07:11.315783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.681 [2024-11-26 20:07:11.319665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.681 [2024-11-26 20:07:11.319714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.681 [2024-11-26 20:07:11.319729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.681 [2024-11-26 20:07:11.326261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.681 [2024-11-26 20:07:11.326327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.681 [2024-11-26 20:07:11.326342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.681 [2024-11-26 20:07:11.332814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.681 [2024-11-26 20:07:11.332891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.681 [2024-11-26 20:07:11.332906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.681 [2024-11-26 20:07:11.339997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.681 [2024-11-26 20:07:11.340039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.681 [2024-11-26 20:07:11.340054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.681 [2024-11-26 20:07:11.346085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.681 [2024-11-26 20:07:11.346148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.681 [2024-11-26 20:07:11.346168] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.681 [2024-11-26 20:07:11.351718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.681 [2024-11-26 20:07:11.351950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.681 [2024-11-26 20:07:11.351965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.681 [2024-11-26 20:07:11.358072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.681 [2024-11-26 20:07:11.358133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.681 [2024-11-26 20:07:11.358149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.681 [2024-11-26 20:07:11.361646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.681 [2024-11-26 20:07:11.361712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.681 [2024-11-26 20:07:11.361730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.681 [2024-11-26 20:07:11.364597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.681 [2024-11-26 20:07:11.364661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.681 [2024-11-26 20:07:11.364676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.681 [2024-11-26 20:07:11.367456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.681 [2024-11-26 20:07:11.367503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.681 [2024-11-26 20:07:11.367518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.681 [2024-11-26 20:07:11.370525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.681 [2024-11-26 20:07:11.370582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.681 [2024-11-26 20:07:11.370597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.681 [2024-11-26 20:07:11.373593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.681 [2024-11-26 20:07:11.373649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.681 [2024-11-26 
20:07:11.373664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.681 [2024-11-26 20:07:11.376413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.681 [2024-11-26 20:07:11.376462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.681 [2024-11-26 20:07:11.376477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.681 [2024-11-26 20:07:11.379243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.681 [2024-11-26 20:07:11.379288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.681 [2024-11-26 20:07:11.379303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.681 [2024-11-26 20:07:11.381799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.681 [2024-11-26 20:07:11.381860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.681 [2024-11-26 20:07:11.381875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.681 [2024-11-26 20:07:11.384585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.681 [2024-11-26 20:07:11.384641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.681 [2024-11-26 20:07:11.384656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.681 [2024-11-26 20:07:11.387820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.681 [2024-11-26 20:07:11.387884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.681 [2024-11-26 20:07:11.387899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.681 [2024-11-26 20:07:11.390975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.681 [2024-11-26 20:07:11.391034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.681 [2024-11-26 20:07:11.391049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.681 [2024-11-26 20:07:11.395198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.681 [2024-11-26 20:07:11.395257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:10.681 [2024-11-26 20:07:11.395272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.681 [2024-11-26 20:07:11.401259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.681 [2024-11-26 20:07:11.401329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.681 [2024-11-26 20:07:11.401345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.681 [2024-11-26 20:07:11.406522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.681 [2024-11-26 20:07:11.406566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.681 [2024-11-26 20:07:11.406581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.681 [2024-11-26 20:07:11.409043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.681 [2024-11-26 20:07:11.409100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.681 [2024-11-26 20:07:11.409116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.681 [2024-11-26 20:07:11.411599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.681 [2024-11-26 20:07:11.411646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.681 [2024-11-26 20:07:11.411661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.681 [2024-11-26 20:07:11.414120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.681 [2024-11-26 20:07:11.414173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.681 [2024-11-26 20:07:11.414188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.681 [2024-11-26 20:07:11.416679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.681 [2024-11-26 20:07:11.416733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.681 [2024-11-26 20:07:11.416748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.681 [2024-11-26 20:07:11.419225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.681 [2024-11-26 20:07:11.419295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.682 [2024-11-26 20:07:11.419309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.682 [2024-11-26 20:07:11.421756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.682 [2024-11-26 20:07:11.421802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.682 [2024-11-26 20:07:11.421817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.682 [2024-11-26 20:07:11.424346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.682 [2024-11-26 20:07:11.424396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.682 [2024-11-26 20:07:11.424411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.682 [2024-11-26 20:07:11.426853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.682 [2024-11-26 20:07:11.426919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.682 [2024-11-26 20:07:11.426934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.682 [2024-11-26 20:07:11.430020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.682 [2024-11-26 20:07:11.430063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.682 [2024-11-26 20:07:11.430078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.682 [2024-11-26 20:07:11.434283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.682 [2024-11-26 20:07:11.434324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.682 [2024-11-26 20:07:11.434340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.682 [2024-11-26 20:07:11.437391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.682 [2024-11-26 20:07:11.437447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.682 [2024-11-26 20:07:11.437463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.682 [2024-11-26 20:07:11.440176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.682 [2024-11-26 20:07:11.440220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.682 [2024-11-26 20:07:11.440235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.682 [2024-11-26 20:07:11.443454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.682 [2024-11-26 20:07:11.443566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.682 [2024-11-26 20:07:11.443584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.682 [2024-11-26 20:07:11.447067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.682 [2024-11-26 20:07:11.447370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.682 [2024-11-26 20:07:11.447385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.682 [2024-11-26 20:07:11.457197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.682 [2024-11-26 20:07:11.457418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.682 [2024-11-26 20:07:11.457432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.682 [2024-11-26 20:07:11.463039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.682 [2024-11-26 20:07:11.463184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.682 [2024-11-26 20:07:11.463199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.682 [2024-11-26 20:07:11.467969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.682 [2024-11-26 20:07:11.468021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.682 [2024-11-26 20:07:11.468036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.682 [2024-11-26 20:07:11.474484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.682 [2024-11-26 20:07:11.474806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.682 [2024-11-26 20:07:11.474822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.682 [2024-11-26 20:07:11.484513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.682 [2024-11-26 20:07:11.484590] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.682 [2024-11-26 20:07:11.484605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.944 [2024-11-26 20:07:11.494902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.944 [2024-11-26 20:07:11.495143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-11-26 20:07:11.495162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.944 [2024-11-26 20:07:11.504675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.944 [2024-11-26 20:07:11.504959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-11-26 20:07:11.504975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.944 [2024-11-26 20:07:11.514173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.944 [2024-11-26 20:07:11.514295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-11-26 20:07:11.514310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.944 [2024-11-26 20:07:11.522952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.944 [2024-11-26 20:07:11.523017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-11-26 20:07:11.523032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.944 [2024-11-26 20:07:11.533608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.944 [2024-11-26 20:07:11.533854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-11-26 20:07:11.533869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.944 [2024-11-26 20:07:11.542880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.944 [2024-11-26 20:07:11.543134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-11-26 20:07:11.543151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.944 [2024-11-26 20:07:11.553491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.944 [2024-11-26 20:07:11.553771] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-11-26 20:07:11.553787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.944 [2024-11-26 20:07:11.560196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.944 [2024-11-26 20:07:11.560293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-11-26 20:07:11.560308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.944 [2024-11-26 20:07:11.564250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.944 [2024-11-26 20:07:11.564355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-11-26 20:07:11.564370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.944 [2024-11-26 20:07:11.571344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.944 [2024-11-26 20:07:11.571567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-11-26 20:07:11.571582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.944 [2024-11-26 20:07:11.576540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.944 [2024-11-26 20:07:11.576593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-11-26 20:07:11.576608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.944 [2024-11-26 20:07:11.580343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.944 [2024-11-26 20:07:11.580388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-11-26 20:07:11.580403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.944 [2024-11-26 20:07:11.583586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.944 [2024-11-26 20:07:11.583643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-11-26 20:07:11.583658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.944 [2024-11-26 20:07:11.587349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.944 [2024-11-26 
20:07:11.587416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-11-26 20:07:11.587431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.944 [2024-11-26 20:07:11.591222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.944 [2024-11-26 20:07:11.591266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-11-26 20:07:11.591281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.944 [2024-11-26 20:07:11.594811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.944 [2024-11-26 20:07:11.594876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-11-26 20:07:11.594890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.944 [2024-11-26 20:07:11.598332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.944 [2024-11-26 20:07:11.598381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-11-26 20:07:11.598397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.944 [2024-11-26 20:07:11.601922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.944 [2024-11-26 20:07:11.602017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-11-26 20:07:11.602032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.944 [2024-11-26 20:07:11.606957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.944 [2024-11-26 20:07:11.607004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-11-26 20:07:11.607019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.944 [2024-11-26 20:07:11.610380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.945 [2024-11-26 20:07:11.610587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.945 [2024-11-26 20:07:11.610604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.945 [2024-11-26 20:07:11.613934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with 
pdu=0x200016eff3c8 00:29:10.945 [2024-11-26 20:07:11.613979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.945 [2024-11-26 20:07:11.613994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.945 [2024-11-26 20:07:11.617434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.945 [2024-11-26 20:07:11.617478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.945 [2024-11-26 20:07:11.617493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.945 [2024-11-26 20:07:11.622086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.945 [2024-11-26 20:07:11.622151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.945 [2024-11-26 20:07:11.622172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.945 [2024-11-26 20:07:11.625262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.945 [2024-11-26 20:07:11.625329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.945 [2024-11-26 20:07:11.625344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.945 [2024-11-26 20:07:11.628129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.945 [2024-11-26 20:07:11.628216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.945 [2024-11-26 20:07:11.628231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.945 [2024-11-26 20:07:11.630712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.945 [2024-11-26 20:07:11.630756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.945 [2024-11-26 20:07:11.630771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.945 [2024-11-26 20:07:11.633308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.945 [2024-11-26 20:07:11.633353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.945 [2024-11-26 20:07:11.633368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.945 [2024-11-26 20:07:11.635891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.945 [2024-11-26 20:07:11.635949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.945 [2024-11-26 20:07:11.635964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.945 [2024-11-26 20:07:11.638474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.945 [2024-11-26 20:07:11.638554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.945 [2024-11-26 20:07:11.638569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.945 [2024-11-26 20:07:11.641037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.945 [2024-11-26 20:07:11.641081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.945 [2024-11-26 20:07:11.641096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.945 [2024-11-26 20:07:11.643605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.945 [2024-11-26 20:07:11.643668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.945 [2024-11-26 20:07:11.643683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.945 [2024-11-26 20:07:11.646626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.945 [2024-11-26 20:07:11.646673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.945 [2024-11-26 20:07:11.646688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.945 [2024-11-26 20:07:11.649499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.945 [2024-11-26 20:07:11.649566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.945 [2024-11-26 20:07:11.649581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.945 [2024-11-26 20:07:11.652674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.945 [2024-11-26 20:07:11.652733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.945 [2024-11-26 20:07:11.652748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.945 [2024-11-26 20:07:11.655190] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.945 [2024-11-26 20:07:11.655240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.945 [2024-11-26 20:07:11.655255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.945 [2024-11-26 20:07:11.657870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.945 [2024-11-26 20:07:11.657942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.945 [2024-11-26 20:07:11.657957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.945 [2024-11-26 20:07:11.661143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.945 [2024-11-26 20:07:11.661212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.945 [2024-11-26 20:07:11.661228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.945 [2024-11-26 20:07:11.663693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.945 [2024-11-26 20:07:11.663736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.945 [2024-11-26 20:07:11.663751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.945 [2024-11-26 20:07:11.666567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.945 [2024-11-26 20:07:11.666654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.945 [2024-11-26 20:07:11.666669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.945 [2024-11-26 20:07:11.675497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.945 [2024-11-26 20:07:11.675790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.945 [2024-11-26 20:07:11.675806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.945 [2024-11-26 20:07:11.684990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.945 [2024-11-26 20:07:11.685097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.945 [2024-11-26 20:07:11.685112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.945 [2024-11-26 20:07:11.695411] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.945 [2024-11-26 20:07:11.695670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.945 [2024-11-26 20:07:11.695686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.945 [2024-11-26 20:07:11.706056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.945 [2024-11-26 20:07:11.706362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.945 [2024-11-26 20:07:11.706377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.945 [2024-11-26 20:07:11.716531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.945 [2024-11-26 20:07:11.716799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.945 [2024-11-26 20:07:11.716814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.945 [2024-11-26 20:07:11.726916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.945 [2024-11-26 20:07:11.727182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.945 [2024-11-26 20:07:11.727197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.945 [2024-11-26 20:07:11.736713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.945 [2024-11-26 20:07:11.736981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.945 [2024-11-26 20:07:11.737000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.945 [2024-11-26 20:07:11.747445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.945 [2024-11-26 20:07:11.747704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.946 [2024-11-26 20:07:11.747719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.946 [2024-11-26 20:07:11.757376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:10.946 [2024-11-26 20:07:11.757500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.946 [2024-11-26 20:07:11.757514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.206 
[2024-11-26 20:07:11.767264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:11.206 [2024-11-26 20:07:11.767569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.206 [2024-11-26 20:07:11.767584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.206 [2024-11-26 20:07:11.774461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:11.206 [2024-11-26 20:07:11.774554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.206 [2024-11-26 20:07:11.774569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.206 4213.00 IOPS, 526.62 MiB/s [2024-11-26T19:07:12.027Z] [2024-11-26 20:07:11.778968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x193d710) with pdu=0x200016eff3c8 00:29:11.206 [2024-11-26 20:07:11.779012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.206 [2024-11-26 20:07:11.779027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.206 00:29:11.206 Latency(us) 00:29:11.206 [2024-11-26T19:07:12.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.206 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:11.206 nvme0n1 : 2.00 4216.34 527.04 0.00 0.00 3790.68 1208.32 12615.68 00:29:11.206 [2024-11-26T19:07:12.027Z] =================================================================================================================== 00:29:11.206 [2024-11-26T19:07:12.027Z] Total : 4216.34 527.04 0.00 0.00 3790.68 1208.32 12615.68 00:29:11.206 { 00:29:11.206 "results": [ 00:29:11.206 { 00:29:11.206 "job": "nvme0n1", 00:29:11.206 "core_mask": "0x2", 00:29:11.206 "workload": "randwrite", 00:29:11.206 "status": "finished", 00:29:11.206 "queue_depth": 16, 00:29:11.206 "io_size": 131072, 00:29:11.206 "runtime": 2.002922, 00:29:11.206 "iops": 4216.339927366118, 00:29:11.206 "mibps": 527.0424909207647, 00:29:11.206 "io_failed": 0, 00:29:11.206 "io_timeout": 0, 00:29:11.206 "avg_latency_us": 3790.680667850799, 00:29:11.206 "min_latency_us": 1208.32, 00:29:11.206 "max_latency_us": 12615.68 00:29:11.206 } 00:29:11.206 ], 00:29:11.206 "core_count": 1 00:29:11.206 } 00:29:11.206 20:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:11.206 20:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:11.206 20:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:11.206 | .driver_specific 00:29:11.206 | .nvme_error 00:29:11.206 | .status_code 00:29:11.206 | .command_transient_transport_error' 00:29:11.206 20:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:11.206 20:07:11 
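The trace at this point is digest.sh's get_transient_errcount: it asks the bperf app for bdev iostat over its JSON-RPC socket and digs the transient-transport-error counter out with jq, then (next line of the trace) asserts it is non-zero — 273 here, one per injected digest failure above. A minimal standalone version of that check, assuming a bperf instance is already serving RPCs on /var/tmp/bperf.sock and using the rpc.py path from this workspace:

```bash
#!/usr/bin/env bash
# Sketch of digest.sh's transient-error check; assumes bperf is already
# running with its JSON-RPC socket at /var/tmp/bperf.sock.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock
BDEV=nvme0n1

# bdev_get_iostat exposes per-bdev NVMe error counters under driver_specific;
# every COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion in the log above
# bumps command_transient_transport_error by one.
errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b "$BDEV" \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

# The test passes only if the injected digest corruption actually surfaced.
(( errcount > 0 )) || { echo "expected transient transport errors, got $errcount" >&2; exit 1; }
echo "observed $errcount transient transport errors on $BDEV"
```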
00:29:11.206 20:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 273 > 0 ))
00:29:11.206 20:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3835258
00:29:11.206 20:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3835258 ']'
00:29:11.206 20:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3835258
00:29:11.206 20:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:11.206 20:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:11.206 20:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3835258
00:29:11.466 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:11.466 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:11.466 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3835258'
00:29:11.466 killing process with pid 3835258
00:29:11.466 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3835258
00:29:11.466 Received shutdown signal, test time was about 2.000000 seconds
00:29:11.466
00:29:11.466 Latency(us)
00:29:11.466 [2024-11-26T19:07:12.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:11.466 [2024-11-26T19:07:12.287Z] ===================================================================================================================
00:29:11.466 [2024-11-26T19:07:12.287Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:11.466 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3835258
00:29:11.466 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3832855
00:29:11.466 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3832855 ']'
00:29:11.466 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3832855
00:29:11.466 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:11.466 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:11.466 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3832855
00:29:11.466 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:29:11.466 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:29:11.466 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3832855'
00:29:11.466 killing process with pid 3832855
00:29:11.466 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3832855
00:29:11.466 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3832855
00:29:11.726
00:29:11.726 real 0m16.513s
00:29:11.726 user 0m32.702s
00:29:11.726 sys 0m3.649s
00:29:11.726 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:11.727 ************************************
00:29:11.727 END TEST nvmf_digest_error
00:29:11.727 ************************************
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:11.727 rmmod nvme_tcp
00:29:11.727 rmmod nvme_fabrics
00:29:11.727 rmmod nvme_keyring
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3832855 ']'
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3832855
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3832855 ']'
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3832855
00:29:11.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3832855) - No such process
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3832855 is not found'
00:29:11.727 Process with pid 3832855 is not found
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:11.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
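nvmftestfini's cleanup path is all visible in the trace above: sync, unload the host-side NVMe modules, then strip only the SPDK-tagged iptables rules (the SPDK_NVMF comment is what iptr greps away) before removing the target namespace. Condensed into plain commands, under the interface names used in this run:

    # Teardown as traced: unload host modules, then restore iptables
    # minus any rule carrying the SPDK_NVMF comment marker.
    sync
    modprobe -v -r nvme-tcp       # drops nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip -4 addr flush cvl_0_1      # leftover initiator-side addresses

Note how killprocess tolerates an already-dead target: the second kill -0 above fails with "No such process", which the helper downgrades to a log line instead of an error.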
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:14.273
00:29:14.273 real 0m43.521s
00:29:14.273 user 1m8.276s
00:29:14.273 sys 0m13.339s
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:29:14.273 ************************************
00:29:14.273 END TEST nvmf_digest
00:29:14.273 ************************************
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:14.273 ************************************
00:29:14.273 START TEST nvmf_bdevperf
00:29:14.273 ************************************
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:29:14.273 * Looking for test storage...
00:29:14.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:29:14.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:14.273 --rc genhtml_branch_coverage=1
00:29:14.273 --rc genhtml_function_coverage=1
00:29:14.273 --rc genhtml_legend=1
00:29:14.273 --rc geninfo_all_blocks=1
00:29:14.273 --rc geninfo_unexecuted_blocks=1
00:29:14.273
00:29:14.273 '
00:29:14.273 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:29:14.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:14.273 --rc genhtml_branch_coverage=1
00:29:14.273 --rc genhtml_function_coverage=1
00:29:14.273 --rc genhtml_legend=1
00:29:14.273 --rc geninfo_all_blocks=1
00:29:14.273 --rc geninfo_unexecuted_blocks=1
00:29:14.273
00:29:14.273 '
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:29:14.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:14.274 --rc genhtml_branch_coverage=1
00:29:14.274 --rc genhtml_function_coverage=1
00:29:14.274 --rc genhtml_legend=1
00:29:14.274 --rc geninfo_all_blocks=1
00:29:14.274 --rc geninfo_unexecuted_blocks=1
00:29:14.274
00:29:14.274 '
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:29:14.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:14.274 --rc genhtml_branch_coverage=1
00:29:14.274 --rc genhtml_function_coverage=1
00:29:14.274 --rc genhtml_legend=1
00:29:14.274 --rc geninfo_all_blocks=1
00:29:14.274 --rc geninfo_unexecuted_blocks=1
00:29:14.274
00:29:14.274 '
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
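The cmp_versions walk above splits "1.15" and "2" on the '.-:' separators and compares field by field; 1 < 2 decides it on the first component, hence return 0 and the coverage options get set. A compact re-implementation in the same spirit (version_lt is a hypothetical name, not the script's own; missing fields default to 0):

    # Field-wise numeric version compare, as sketched from the trace above.
    version_lt() {
        local IFS=.-: i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov is older than 2"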
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:29:14.274 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit
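One genuine script wart surfaces in the trace above: build_nvmf_app_args evaluates '[' '' -eq 1 ']' with an unset flag, and bash rejects the empty string as a number ("line 33: [: : integer expression expected"); the test ends up false by accident rather than by design. The usual defensive spelling, with VAR as a hypothetical stand-in for whatever variable line 33 reads:

    VAR=""
    [ "$VAR" -eq 1 ] && echo on        # noisy: "integer expression expected"
    [ "${VAR:-0}" -eq 1 ] && echo on   # quiet: empty defaults to 0, test stays well-formed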
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable
00:29:14.274 20:07:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=()
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=()
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=()
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=()
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=()
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=()
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=()
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:29:22.408 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:29:22.408 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:22.408 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
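gather_supported_nvmf_pci_devs above is driven entirely by sysfs: known Intel/Mellanox device IDs select PCI functions, and each function's net/ subdirectory names its kernel interface. A minimal standalone sketch of that discovery (0x8086/0x159b is the E810 pairing matched in this run):

    # List net interfaces backed by Intel E810 (vendor 0x8086, device 0x159b).
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "${pci##*/}: ${net##*/}"
        done
    done

On this host that yields exactly the two ports the loop below reports: cvl_0_0 under 0000:4b:00.0 and cvl_0_1 under 0000:4b:00.1.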
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:29:22.409 Found net devices under 0000:4b:00.0: cvl_0_0
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:29:22.409 Found net devices under 0000:4b:00.1: cvl_0_1
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:22.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:22.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.714 ms
00:29:22.409
00:29:22.409 --- 10.0.0.2 ping statistics ---
00:29:22.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:22.409 rtt min/avg/max/mdev = 0.714/0.714/0.714/0.000 ms
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:22.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:22.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms
00:29:22.409
00:29:22.409 --- 10.0.0.1 ping statistics ---
00:29:22.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:22.409 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3840281
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3840281
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3840281 ']'
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:22.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:22.409 20:07:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
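The plumbing traced above gives the target a private view of the wire: port cvl_0_0 moves into namespace cvl_0_0_ns_spdk and takes 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace on 10.0.0.1/24, an iptables ACCEPT rule (tagged SPDK_NVMF so teardown can find it later) opens port 4420, and the two pings prove the path in both directions before nvmf_tgt launches. Condensed to its essentials, with an assumed polling loop standing in for waitforlisten, whose body this trace does not show:

    # Namespace plumbing as in nvmf_tcp_init, then wait for the target RPC.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # Assumed shape of the wait: poll until the RPC socket answers.
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/spdk.sock spdk_get_version &>/dev/null; do sleep 0.1; done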
00:29:22.409 [2024-11-26 20:07:22.474779] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization...
00:29:22.409 [2024-11-26 20:07:22.474848] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:22.409 [2024-11-26 20:07:22.576126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:22.409 [2024-11-26 20:07:22.628016] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:22.409 [2024-11-26 20:07:22.628069] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:22.409 [2024-11-26 20:07:22.628077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:22.409 [2024-11-26 20:07:22.628085] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:22.409 [2024-11-26 20:07:22.628092] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:22.409 [2024-11-26 20:07:22.629974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:22.409 [2024-11-26 20:07:22.630202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:29:22.409 [2024-11-26 20:07:22.630260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:22.671 [2024-11-26 20:07:23.344047] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:22.671 Malloc0
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
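The target came up on reactors 1-3 because -m 0xE is binary 1110, and tracing is wide open (-e 0xFFFF). From here the whole export is built with five RPCs: the three traced above plus the namespace and listener calls that follow below. Issued by hand they would look like this (the UNIX RPC socket is a filesystem object, so no netns juggling is needed):

    RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420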
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:22.671 [2024-11-26 20:07:23.421549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:29:22.671 {
00:29:22.671 "params": {
00:29:22.671 "name": "Nvme$subsystem",
00:29:22.671 "trtype": "$TEST_TRANSPORT",
00:29:22.671 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:22.671 "adrfam": "ipv4",
00:29:22.671 "trsvcid": "$NVMF_PORT",
00:29:22.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:22.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:22.671 "hdgst": ${hdgst:-false},
00:29:22.671 "ddgst": ${ddgst:-false}
00:29:22.671 },
00:29:22.671 "method": "bdev_nvme_attach_controller"
00:29:22.671 }
00:29:22.671 EOF
00:29:22.671 )")
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:29:22.671 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:29:22.672 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:29:22.672 20:07:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:29:22.672 "params": {
00:29:22.672 "name": "Nvme1",
00:29:22.672 "trtype": "tcp",
00:29:22.672 "traddr": "10.0.0.2",
00:29:22.672 "adrfam": "ipv4",
00:29:22.672 "trsvcid": "4420",
00:29:22.672 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:22.672 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:29:22.672 "hdgst": false,
00:29:22.672 "ddgst": false
00:29:22.672 },
00:29:22.672 "method": "bdev_nvme_attach_controller"
00:29:22.672 }'
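bdevperf has no built-in fabric setup; it attaches wherever the JSON handed to --json tells it to. gen_nvmf_target_json renders the bdev_nvme_attach_controller call printed above (Nvme1 over tcp to 10.0.0.2:4420, both digests off) and the script supplies it on fd 62. Driven by hand, the equivalent is to let process substitution provide the descriptor:

    # -t 1: a short verify pass against the freshly exported namespace.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 1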
00:29:22.672 [2024-11-26 20:07:23.481177] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization...
00:29:22.672 [2024-11-26 20:07:23.481247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3840367 ]
00:29:22.933 [2024-11-26 20:07:23.575256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:22.933 [2024-11-26 20:07:23.628665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:23.194 Running I/O for 1 seconds...
00:29:24.579 8726.00 IOPS, 34.09 MiB/s
00:29:24.579
00:29:24.579 Latency(us)
00:29:24.579 [2024-11-26T19:07:25.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:24.579 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:24.579 Verification LBA range: start 0x0 length 0x4000
00:29:24.579 Nvme1n1 : 1.01 8739.33 34.14 0.00 0.00 14579.82 3003.73 12561.07
00:29:24.580 [2024-11-26T19:07:25.401Z] ===================================================================================================================
00:29:24.580 [2024-11-26T19:07:25.401Z] Total : 8739.33 34.14 0.00 0.00 14579.82 3003.73 12561.07
00:29:24.580 20:07:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3840673
00:29:24.580 20:07:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:29:24.580 20:07:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:29:24.580 20:07:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:29:24.580 20:07:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:29:24.580 20:07:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:29:24.580 20:07:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:29:24.580 20:07:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:29:24.580 {
00:29:24.580 "params": {
00:29:24.580 "name": "Nvme$subsystem",
00:29:24.580 "trtype": "$TEST_TRANSPORT",
00:29:24.580 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:24.580 "adrfam": "ipv4",
00:29:24.580 "trsvcid": "$NVMF_PORT",
00:29:24.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:24.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:24.580 "hdgst": ${hdgst:-false},
00:29:24.580 "ddgst": ${ddgst:-false}
00:29:24.580 },
00:29:24.580 "method": "bdev_nvme_attach_controller"
00:29:24.580 }
00:29:24.580 EOF
00:29:24.580 )")
00:29:24.580 20:07:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:29:24.580 20:07:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
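The throughput figures in that table are internally consistent: at a 4096-byte I/O size, MiB/s is just IOPS x 4096 / 2^20, so the 8739.33 IOPS average is exactly the 34.14 MiB/s shown (and the 8726.00 IOPS progress line works out to the 34.09 MiB/s beside it). As a one-line check:

    awk 'BEGIN { printf "%.2f MiB/s\n", 8739.33 * 4096 / 1048576 }'   # -> 34.14 MiB/s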
00:29:24.580 20:07:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:29:24.580 20:07:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:29:24.580 "params": {
00:29:24.580 "name": "Nvme1",
00:29:24.580 "trtype": "tcp",
00:29:24.580 "traddr": "10.0.0.2",
00:29:24.580 "adrfam": "ipv4",
00:29:24.580 "trsvcid": "4420",
00:29:24.580 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:24.580 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:29:24.580 "hdgst": false,
00:29:24.580 "ddgst": false
00:29:24.580 },
00:29:24.580 "method": "bdev_nvme_attach_controller"
00:29:24.580 }'
00:29:24.580 [2024-11-26 20:07:25.181539] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization...
00:29:24.580 [2024-11-26 20:07:25.181601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3840673 ]
00:29:24.580 [2024-11-26 20:07:25.268705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:24.580 [2024-11-26 20:07:25.304299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:24.841 Running I/O for 15 seconds...
00:29:26.727 11140.00 IOPS, 43.52 MiB/s [2024-11-26T19:07:28.494Z] 11265.00 IOPS, 44.00 MiB/s [2024-11-26T19:07:28.494Z] 20:07:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3840281
00:29:27.673 20:07:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
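bdevperf.sh@33 kills the whole nvmf target (nvmfpid 3840281) while the 15-second verify job still has a full queue in flight; once the TCP connection collapses, every queued WRITE is failed back by the host driver, and the flood that follows is that backlog draining. In sketch form, the fault being injected (pid and pause from this run; the restart that completes the scenario lies beyond this excerpt):

    # Hard-kill the target under load, then give the host side a moment
    # to notice the dead connection and fail back the outstanding I/O.
    kill -9 3840281
    sleep 3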
00:29:27.673 [2024-11-26 20:07:28.145256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:113424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.673 [2024-11-26 20:07:28.145295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.673 [2024-11-26 20:07:28.145316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.673 [2024-11-26 20:07:28.145326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.673 [2024-11-26 20:07:28.145336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:113440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.673 [2024-11-26 20:07:28.145345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.673 [2024-11-26 20:07:28.145356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.673 [2024-11-26 20:07:28.145364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.673 [2024-11-26 20:07:28.145373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.673 [2024-11-26 20:07:28.145380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.673 [2024-11-26 20:07:28.145390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:113464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.673 [2024-11-26 20:07:28.145401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.673 [2024-11-26 20:07:28.145411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.673 [2024-11-26 20:07:28.145420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.673 [2024-11-26 20:07:28.145437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.673 [2024-11-26 20:07:28.145446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.673 [2024-11-26 20:07:28.145457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:113488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.673 [2024-11-26 20:07:28.145465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.673 [2024-11-26 20:07:28.145474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.673 [2024-11-26 20:07:28.145482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.673 [2024-11-26 20:07:28.145493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:113504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.673 [2024-11-26 20:07:28.145503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.673 [2024-11-26 20:07:28.145515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.673 [2024-11-26 20:07:28.145524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.673 [2024-11-26 20:07:28.145536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.673 [2024-11-26 20:07:28.145543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.673 [2024-11-26 20:07:28.145552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.673 [2024-11-26 20:07:28.145560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.673 [2024-11-26 20:07:28.145570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.673 [2024-11-26 20:07:28.145578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.673 [2024-11-26 20:07:28.145587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:113544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.673 [2024-11-26 20:07:28.145594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.673 [2024-11-26 20:07:28.145604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:113552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.673 [2024-11-26 20:07:28.145611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.145621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.145628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.145637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:113568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.145645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.145655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.145663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.145673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:113584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.145680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.145690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:113592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.145697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.145707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.145715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.145725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:113608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.145732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.145742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:113616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.145750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.145759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:113624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.145767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.145776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:113632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.145783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.145793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:113640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.145800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.145809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:113648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.145816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.145825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.145833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.145842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:113664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.145850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.145859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:113672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.145866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.145875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.145884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.145894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.145901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.145911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:113696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.145918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.145928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:113704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.145935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.145944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.145952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.145961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:113720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.145968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.145978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:113728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.145985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.145994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:113736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.146002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.146011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:113744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.146018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.146027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.146034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.146044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.146051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.146060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.146067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.146077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.146084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.146095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:113784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.674 [2024-11-26 20:07:28.146102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.674 [2024-11-26 20:07:28.146111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:113792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.674 [2024-11-26 20:07:28.146119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.674 [2024-11-26 20:07:28.146128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:113800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.674 [2024-11-26 20:07:28.146136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.674 [2024-11-26 20:07:28.146145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:113808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.674 [2024-11-26 20:07:28.146152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.674 [2024-11-26 20:07:28.146166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:112800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.674 [2024-11-26 20:07:28.146173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.674 [2024-11-26 20:07:28.146183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.674 [2024-11-26 20:07:28.146190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.674 [2024-11-26 20:07:28.146199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.674 [2024-11-26 20:07:28.146206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.674 [2024-11-26 20:07:28.146216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:112824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.674 [2024-11-26 20:07:28.146224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.674 [2024-11-26 20:07:28.146234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:112832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.674 [2024-11-26 20:07:28.146241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.674 [2024-11-26 20:07:28.146250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:112840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.674 [2024-11-26 20:07:28.146258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.674 [2024-11-26 20:07:28.146267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.674 [2024-11-26 20:07:28.146274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.674 [2024-11-26 
20:07:28.146284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:112856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.674 [2024-11-26 20:07:28.146291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:112872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:112880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:112888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:113816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.675 [2024-11-26 20:07:28.146409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146453] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:112952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:112960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:112968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:113000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146622] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:109 nsid:1 lba:113008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:113024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:113048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:113080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 
nsid:1 lba:113088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:113096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:113104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:113128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:113144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:113152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.675 [2024-11-26 20:07:28.146946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.675 [2024-11-26 20:07:28.146954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.146963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113168 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.146971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.146981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:113176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.146989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.146999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:113184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.147006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:113192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.147023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.147040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:113208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.147056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.147073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:113224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.147090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.147106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.147123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:27.676 [2024-11-26 20:07:28.147140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:113256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.147162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.147179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.147196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.147212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:113288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.147229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:113296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.147245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:113304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.147262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:113312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.147278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:113320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.147295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 
20:07:28.147311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:113336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.147328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:113344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.147345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.147361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.147380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.147396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:113376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.147413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.147430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.147447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.147463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:113408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.676 [2024-11-26 20:07:28.147480] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.147489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e61170 is same with the state(6) to be set 00:29:27.676 [2024-11-26 20:07:28.147498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:27.676 [2024-11-26 20:07:28.147504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:27.676 [2024-11-26 20:07:28.147510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113416 len:8 PRP1 0x0 PRP2 0x0 00:29:27.676 [2024-11-26 20:07:28.147518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.676 [2024-11-26 20:07:28.151163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.676 [2024-11-26 20:07:28.151213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.676 [2024-11-26 20:07:28.151956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.676 [2024-11-26 20:07:28.151972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.676 [2024-11-26 20:07:28.151981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.676 [2024-11-26 20:07:28.152205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.676 [2024-11-26 20:07:28.152425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.676 [2024-11-26 20:07:28.152434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.676 [2024-11-26 20:07:28.152442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.676 [2024-11-26 20:07:28.152455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:27.676 [2024-11-26 20:07:28.165345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.676 [2024-11-26 20:07:28.165953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.676 [2024-11-26 20:07:28.165992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.676 [2024-11-26 20:07:28.166004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.676 [2024-11-26 20:07:28.166252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.676 [2024-11-26 20:07:28.166476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.677 [2024-11-26 20:07:28.166485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.677 [2024-11-26 20:07:28.166493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.677 [2024-11-26 20:07:28.166501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:27.677 [2024-11-26 20:07:28.179206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.677 [2024-11-26 20:07:28.179879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.677 [2024-11-26 20:07:28.179919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.677 [2024-11-26 20:07:28.179930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.677 [2024-11-26 20:07:28.180178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.677 [2024-11-26 20:07:28.180402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.677 [2024-11-26 20:07:28.180411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.677 [2024-11-26 20:07:28.180419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.677 [2024-11-26 20:07:28.180427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:27.677 [2024-11-26 20:07:28.193102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.677 [2024-11-26 20:07:28.193756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.677 [2024-11-26 20:07:28.193795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.677 [2024-11-26 20:07:28.193807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.677 [2024-11-26 20:07:28.194045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.677 [2024-11-26 20:07:28.194276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.677 [2024-11-26 20:07:28.194286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.677 [2024-11-26 20:07:28.194295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.677 [2024-11-26 20:07:28.194303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:27.677 [2024-11-26 20:07:28.206977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.677 [2024-11-26 20:07:28.207657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.677 [2024-11-26 20:07:28.207699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.677 [2024-11-26 20:07:28.207710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.677 [2024-11-26 20:07:28.207950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.677 [2024-11-26 20:07:28.208183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.677 [2024-11-26 20:07:28.208193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.677 [2024-11-26 20:07:28.208201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.677 [2024-11-26 20:07:28.208209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:27.677 [2024-11-26 20:07:28.220887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.677 [2024-11-26 20:07:28.221453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.677 [2024-11-26 20:07:28.221475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.677 [2024-11-26 20:07:28.221483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.677 [2024-11-26 20:07:28.221702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.677 [2024-11-26 20:07:28.221920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.677 [2024-11-26 20:07:28.221928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.677 [2024-11-26 20:07:28.221935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.677 [2024-11-26 20:07:28.221942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:27.677 [2024-11-26 20:07:28.234827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.677 [2024-11-26 20:07:28.235395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.677 [2024-11-26 20:07:28.235414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.677 [2024-11-26 20:07:28.235422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.677 [2024-11-26 20:07:28.235640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.677 [2024-11-26 20:07:28.235858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.677 [2024-11-26 20:07:28.235865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.677 [2024-11-26 20:07:28.235873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.677 [2024-11-26 20:07:28.235880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:27.677 [2024-11-26 20:07:28.248752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.677 [2024-11-26 20:07:28.249322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.677 [2024-11-26 20:07:28.249341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.677 [2024-11-26 20:07:28.249354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.677 [2024-11-26 20:07:28.249573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.677 [2024-11-26 20:07:28.249791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.677 [2024-11-26 20:07:28.249799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.677 [2024-11-26 20:07:28.249806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.677 [2024-11-26 20:07:28.249813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:27.677 [2024-11-26 20:07:28.262693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.677 [2024-11-26 20:07:28.263265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.677 [2024-11-26 20:07:28.263314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.677 [2024-11-26 20:07:28.263327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.677 [2024-11-26 20:07:28.263574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.677 [2024-11-26 20:07:28.263797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.677 [2024-11-26 20:07:28.263806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.677 [2024-11-26 20:07:28.263815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.677 [2024-11-26 20:07:28.263823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:27.677 [2024-11-26 20:07:28.276545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.677 [2024-11-26 20:07:28.277246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.677 [2024-11-26 20:07:28.277297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.677 [2024-11-26 20:07:28.277311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.677 [2024-11-26 20:07:28.277559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.677 [2024-11-26 20:07:28.277787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.678 [2024-11-26 20:07:28.277796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.678 [2024-11-26 20:07:28.277804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.678 [2024-11-26 20:07:28.277813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:27.678 [2024-11-26 20:07:28.290540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.678 [2024-11-26 20:07:28.291263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.678 [2024-11-26 20:07:28.291322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.678 [2024-11-26 20:07:28.291335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.678 [2024-11-26 20:07:28.291586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.678 [2024-11-26 20:07:28.291819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.678 [2024-11-26 20:07:28.291828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.678 [2024-11-26 20:07:28.291836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.678 [2024-11-26 20:07:28.291845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:27.678 [2024-11-26 20:07:28.304361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.678 [2024-11-26 20:07:28.305046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.678 [2024-11-26 20:07:28.305108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.678 [2024-11-26 20:07:28.305121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.678 [2024-11-26 20:07:28.305387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.678 [2024-11-26 20:07:28.305616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.678 [2024-11-26 20:07:28.305625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.678 [2024-11-26 20:07:28.305633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.678 [2024-11-26 20:07:28.305642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:27.678 [2024-11-26 20:07:28.318141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.678 [2024-11-26 20:07:28.318853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.678 [2024-11-26 20:07:28.318916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.678 [2024-11-26 20:07:28.318929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.678 [2024-11-26 20:07:28.319196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.678 [2024-11-26 20:07:28.319424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.678 [2024-11-26 20:07:28.319433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.678 [2024-11-26 20:07:28.319442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.678 [2024-11-26 20:07:28.319451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:27.678 [2024-11-26 20:07:28.331958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.678 [2024-11-26 20:07:28.332706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.678 [2024-11-26 20:07:28.332768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.678 [2024-11-26 20:07:28.332780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.678 [2024-11-26 20:07:28.333034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.678 [2024-11-26 20:07:28.333273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.678 [2024-11-26 20:07:28.333284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.678 [2024-11-26 20:07:28.333292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.678 [2024-11-26 20:07:28.333310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:27.678 [2024-11-26 20:07:28.345827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.678 [2024-11-26 20:07:28.346526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.678 [2024-11-26 20:07:28.346589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.678 [2024-11-26 20:07:28.346602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.678 [2024-11-26 20:07:28.346856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.678 [2024-11-26 20:07:28.347082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.678 [2024-11-26 20:07:28.347091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.678 [2024-11-26 20:07:28.347100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.678 [2024-11-26 20:07:28.347109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:27.678 [2024-11-26 20:07:28.359631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.678 [2024-11-26 20:07:28.360280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.678 [2024-11-26 20:07:28.360342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.678 [2024-11-26 20:07:28.360355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.678 [2024-11-26 20:07:28.360608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.678 [2024-11-26 20:07:28.360834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.678 [2024-11-26 20:07:28.360843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.678 [2024-11-26 20:07:28.360852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.678 [2024-11-26 20:07:28.360861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:27.678 [2024-11-26 20:07:28.373596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.678 [2024-11-26 20:07:28.374274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.678 [2024-11-26 20:07:28.374338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.678 [2024-11-26 20:07:28.374352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.678 [2024-11-26 20:07:28.374607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.678 [2024-11-26 20:07:28.374847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.678 [2024-11-26 20:07:28.374858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.678 [2024-11-26 20:07:28.374867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.678 [2024-11-26 20:07:28.374876] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:27.678 [2024-11-26 20:07:28.387396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.678 [2024-11-26 20:07:28.388055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.678 [2024-11-26 20:07:28.388116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.678 [2024-11-26 20:07:28.388129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.678 [2024-11-26 20:07:28.388409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.678 [2024-11-26 20:07:28.388637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.678 [2024-11-26 20:07:28.388646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.678 [2024-11-26 20:07:28.388655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.678 [2024-11-26 20:07:28.388664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:27.678 [2024-11-26 20:07:28.401370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.678 [2024-11-26 20:07:28.402080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.678 [2024-11-26 20:07:28.402144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.678 [2024-11-26 20:07:28.402171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.678 [2024-11-26 20:07:28.402427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.678 [2024-11-26 20:07:28.402653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.678 [2024-11-26 20:07:28.402663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.678 [2024-11-26 20:07:28.402671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.678 [2024-11-26 20:07:28.402681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:27.678 [2024-11-26 20:07:28.415212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.678 [2024-11-26 20:07:28.415960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.678 [2024-11-26 20:07:28.416023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.678 [2024-11-26 20:07:28.416036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.678 [2024-11-26 20:07:28.416306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.678 [2024-11-26 20:07:28.416533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.679 [2024-11-26 20:07:28.416544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.679 [2024-11-26 20:07:28.416552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.679 [2024-11-26 20:07:28.416561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:27.679 [2024-11-26 20:07:28.429071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.679 [2024-11-26 20:07:28.429800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.679 [2024-11-26 20:07:28.429863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.679 [2024-11-26 20:07:28.429886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.679 [2024-11-26 20:07:28.430140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.679 [2024-11-26 20:07:28.430379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.679 [2024-11-26 20:07:28.430389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.679 [2024-11-26 20:07:28.430400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.679 [2024-11-26 20:07:28.430411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:27.679 [2024-11-26 20:07:28.442926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.679 [2024-11-26 20:07:28.443623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.679 [2024-11-26 20:07:28.443685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.679 [2024-11-26 20:07:28.443698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.679 [2024-11-26 20:07:28.443952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.679 [2024-11-26 20:07:28.444194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.679 [2024-11-26 20:07:28.444204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.679 [2024-11-26 20:07:28.444212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.679 [2024-11-26 20:07:28.444221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:27.679 [2024-11-26 20:07:28.456731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.679 [2024-11-26 20:07:28.457317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.679 [2024-11-26 20:07:28.457347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.679 [2024-11-26 20:07:28.457356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.679 [2024-11-26 20:07:28.457578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.679 [2024-11-26 20:07:28.457799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.679 [2024-11-26 20:07:28.457808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.679 [2024-11-26 20:07:28.457816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.679 [2024-11-26 20:07:28.457824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:27.679 10161.33 IOPS, 39.69 MiB/s [2024-11-26T19:07:28.500Z] [2024-11-26 20:07:28.470545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.679 [2024-11-26 20:07:28.471228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.679 [2024-11-26 20:07:28.471291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.679 [2024-11-26 20:07:28.471306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.679 [2024-11-26 20:07:28.471561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.679 [2024-11-26 20:07:28.471795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.679 [2024-11-26 20:07:28.471804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.679 [2024-11-26 20:07:28.471813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.679 [2024-11-26 20:07:28.471822] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:27.679 [2024-11-26 20:07:28.484365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.679 [2024-11-26 20:07:28.484988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.679 [2024-11-26 20:07:28.485050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.679 [2024-11-26 20:07:28.485063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.679 [2024-11-26 20:07:28.485332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.679 [2024-11-26 20:07:28.485559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.679 [2024-11-26 20:07:28.485568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.679 [2024-11-26 20:07:28.485577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.679 [2024-11-26 20:07:28.485586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
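The bracketed marker interleaved above ("10161.33 IOPS, 39.69 MiB/s" at [2024-11-26T19:07:28.500Z]) is the workload's periodic throughput sample, printed between the error records. The two figures are mutually consistent with a 4 KiB I/O size — an inference from the numbers, not something the excerpt states: 10161.33 IOPS × 4096 bytes ≈ 41,620,808 B/s, and 41,620,808 / 2^20 ≈ 39.69 MiB/s.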
00:29:27.942 [2024-11-26 20:07:28.498332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.942 [2024-11-26 20:07:28.499053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.942 [2024-11-26 20:07:28.499114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.942 [2024-11-26 20:07:28.499127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.942 [2024-11-26 20:07:28.499396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.942 [2024-11-26 20:07:28.499624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.942 [2024-11-26 20:07:28.499634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.942 [2024-11-26 20:07:28.499643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.942 [2024-11-26 20:07:28.499652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:27.942 [2024-11-26 20:07:28.512155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.942 [2024-11-26 20:07:28.512848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.942 [2024-11-26 20:07:28.512909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.942 [2024-11-26 20:07:28.512922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.942 [2024-11-26 20:07:28.513191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.942 [2024-11-26 20:07:28.513418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.942 [2024-11-26 20:07:28.513428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.942 [2024-11-26 20:07:28.513446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.942 [2024-11-26 20:07:28.513455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:27.942 [2024-11-26 20:07:28.525959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.942 [2024-11-26 20:07:28.526694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.942 [2024-11-26 20:07:28.526756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.942 [2024-11-26 20:07:28.526769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.942 [2024-11-26 20:07:28.527022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.942 [2024-11-26 20:07:28.527262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.942 [2024-11-26 20:07:28.527273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.942 [2024-11-26 20:07:28.527282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.942 [2024-11-26 20:07:28.527291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:27.942 [2024-11-26 20:07:28.539840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.942 [2024-11-26 20:07:28.540590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.942 [2024-11-26 20:07:28.540653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.942 [2024-11-26 20:07:28.540667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.942 [2024-11-26 20:07:28.540921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.942 [2024-11-26 20:07:28.541147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.942 [2024-11-26 20:07:28.541157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.942 [2024-11-26 20:07:28.541180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.942 [2024-11-26 20:07:28.541189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:27.943 [2024-11-26 20:07:28.553694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.943 [2024-11-26 20:07:28.554482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.943 [2024-11-26 20:07:28.554544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.943 [2024-11-26 20:07:28.554557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.943 [2024-11-26 20:07:28.554811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.943 [2024-11-26 20:07:28.555037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.943 [2024-11-26 20:07:28.555046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.943 [2024-11-26 20:07:28.555055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.943 [2024-11-26 20:07:28.555064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:27.943 [2024-11-26 20:07:28.567606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.943 [2024-11-26 20:07:28.568265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.943 [2024-11-26 20:07:28.568329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.943 [2024-11-26 20:07:28.568344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.943 [2024-11-26 20:07:28.568598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.943 [2024-11-26 20:07:28.568824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.943 [2024-11-26 20:07:28.568834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.943 [2024-11-26 20:07:28.568842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.943 [2024-11-26 20:07:28.568851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:27.943 [2024-11-26 20:07:28.581855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.943 [2024-11-26 20:07:28.582610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.943 [2024-11-26 20:07:28.582672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.943 [2024-11-26 20:07:28.582685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.943 [2024-11-26 20:07:28.582939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.943 [2024-11-26 20:07:28.583180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.943 [2024-11-26 20:07:28.583191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.943 [2024-11-26 20:07:28.583200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.943 [2024-11-26 20:07:28.583210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:27.943 [2024-11-26 20:07:28.595748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.943 [2024-11-26 20:07:28.596485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.943 [2024-11-26 20:07:28.596549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.943 [2024-11-26 20:07:28.596561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.943 [2024-11-26 20:07:28.596816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.943 [2024-11-26 20:07:28.597041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.943 [2024-11-26 20:07:28.597051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.943 [2024-11-26 20:07:28.597060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.943 [2024-11-26 20:07:28.597069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:27.943 [2024-11-26 20:07:28.609595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.943 [2024-11-26 20:07:28.610275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.943 [2024-11-26 20:07:28.610338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.943 [2024-11-26 20:07:28.610368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.943 [2024-11-26 20:07:28.610623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.943 [2024-11-26 20:07:28.610850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.943 [2024-11-26 20:07:28.610861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.943 [2024-11-26 20:07:28.610870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.943 [2024-11-26 20:07:28.610879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:27.943 [2024-11-26 20:07:28.623402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.943 [2024-11-26 20:07:28.624040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.943 [2024-11-26 20:07:28.624070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.943 [2024-11-26 20:07:28.624080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.943 [2024-11-26 20:07:28.624312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.943 [2024-11-26 20:07:28.624534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.943 [2024-11-26 20:07:28.624544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.943 [2024-11-26 20:07:28.624553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.943 [2024-11-26 20:07:28.624561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:27.943 [2024-11-26 20:07:28.637266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.943 [2024-11-26 20:07:28.637876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.943 [2024-11-26 20:07:28.637900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.943 [2024-11-26 20:07:28.637908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.943 [2024-11-26 20:07:28.638127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.943 [2024-11-26 20:07:28.638357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.943 [2024-11-26 20:07:28.638371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.943 [2024-11-26 20:07:28.638379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.943 [2024-11-26 20:07:28.638386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:27.943 [2024-11-26 20:07:28.651104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.943 [2024-11-26 20:07:28.651818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.943 [2024-11-26 20:07:28.651882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.943 [2024-11-26 20:07:28.651895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.943 [2024-11-26 20:07:28.652149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.943 [2024-11-26 20:07:28.652394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.943 [2024-11-26 20:07:28.652405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.943 [2024-11-26 20:07:28.652414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.943 [2024-11-26 20:07:28.652423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:27.943 [2024-11-26 20:07:28.664942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.943 [2024-11-26 20:07:28.665483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.943 [2024-11-26 20:07:28.665513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.943 [2024-11-26 20:07:28.665524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.943 [2024-11-26 20:07:28.665748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.943 [2024-11-26 20:07:28.665968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.943 [2024-11-26 20:07:28.665979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.943 [2024-11-26 20:07:28.665987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.943 [2024-11-26 20:07:28.665995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:27.943 [2024-11-26 20:07:28.678894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.943 [2024-11-26 20:07:28.679683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.943 [2024-11-26 20:07:28.679746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.943 [2024-11-26 20:07:28.679759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.943 [2024-11-26 20:07:28.680013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.944 [2024-11-26 20:07:28.680250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.944 [2024-11-26 20:07:28.680260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.944 [2024-11-26 20:07:28.680269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.944 [2024-11-26 20:07:28.680278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:27.944 [2024-11-26 20:07:28.692830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.944 [2024-11-26 20:07:28.693577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.944 [2024-11-26 20:07:28.693640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.944 [2024-11-26 20:07:28.693653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.944 [2024-11-26 20:07:28.693908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.944 [2024-11-26 20:07:28.694134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.944 [2024-11-26 20:07:28.694144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.944 [2024-11-26 20:07:28.694172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.944 [2024-11-26 20:07:28.694182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:27.944 [2024-11-26 20:07:28.706691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.944 [2024-11-26 20:07:28.707314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.944 [2024-11-26 20:07:28.707378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.944 [2024-11-26 20:07:28.707393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.944 [2024-11-26 20:07:28.707647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.944 [2024-11-26 20:07:28.707874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.944 [2024-11-26 20:07:28.707883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.944 [2024-11-26 20:07:28.707892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.944 [2024-11-26 20:07:28.707901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:27.944 [2024-11-26 20:07:28.720623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.944 [2024-11-26 20:07:28.721284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.944 [2024-11-26 20:07:28.721347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.944 [2024-11-26 20:07:28.721359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.944 [2024-11-26 20:07:28.721612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.944 [2024-11-26 20:07:28.721839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.944 [2024-11-26 20:07:28.721854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.944 [2024-11-26 20:07:28.721864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.944 [2024-11-26 20:07:28.721873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:27.944 [2024-11-26 20:07:28.734617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.944 [2024-11-26 20:07:28.735101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.944 [2024-11-26 20:07:28.735134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.944 [2024-11-26 20:07:28.735143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.944 [2024-11-26 20:07:28.735376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.944 [2024-11-26 20:07:28.735600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.944 [2024-11-26 20:07:28.735609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.944 [2024-11-26 20:07:28.735617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.944 [2024-11-26 20:07:28.735625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:27.944 [2024-11-26 20:07:28.748572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.944 [2024-11-26 20:07:28.749258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.944 [2024-11-26 20:07:28.749322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:27.944 [2024-11-26 20:07:28.749337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:27.944 [2024-11-26 20:07:28.749592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:27.944 [2024-11-26 20:07:28.749819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.944 [2024-11-26 20:07:28.749830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.944 [2024-11-26 20:07:28.749839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.944 [2024-11-26 20:07:28.749848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.208 [2024-11-26 20:07:28.762400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.208 [2024-11-26 20:07:28.762815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.208 [2024-11-26 20:07:28.762845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:28.208 [2024-11-26 20:07:28.762856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:28.208 [2024-11-26 20:07:28.763077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:28.208 [2024-11-26 20:07:28.763309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.208 [2024-11-26 20:07:28.763319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.208 [2024-11-26 20:07:28.763328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.208 [2024-11-26 20:07:28.763336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.208 [2024-11-26 20:07:28.776268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.208 [2024-11-26 20:07:28.776921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.208 [2024-11-26 20:07:28.776984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:28.208 [2024-11-26 20:07:28.776997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:28.208 [2024-11-26 20:07:28.777265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:28.208 [2024-11-26 20:07:28.777493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.208 [2024-11-26 20:07:28.777503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.208 [2024-11-26 20:07:28.777512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.208 [2024-11-26 20:07:28.777521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.208 [2024-11-26 20:07:28.789018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.208 [2024-11-26 20:07:28.789624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.208 [2024-11-26 20:07:28.789650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:28.208 [2024-11-26 20:07:28.789664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:28.208 [2024-11-26 20:07:28.789818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:28.208 [2024-11-26 20:07:28.789971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.208 [2024-11-26 20:07:28.789977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.208 [2024-11-26 20:07:28.789983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.208 [2024-11-26 20:07:28.789989] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.208 [2024-11-26 20:07:28.801721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.208 [2024-11-26 20:07:28.802236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.208 [2024-11-26 20:07:28.802256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:28.208 [2024-11-26 20:07:28.802262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:28.208 [2024-11-26 20:07:28.802415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:28.208 [2024-11-26 20:07:28.802566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.208 [2024-11-26 20:07:28.802573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.208 [2024-11-26 20:07:28.802580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.208 [2024-11-26 20:07:28.802586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.208 [2024-11-26 20:07:28.814424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.208 [2024-11-26 20:07:28.814993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.208 [2024-11-26 20:07:28.815042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:28.208 [2024-11-26 20:07:28.815051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:28.208 [2024-11-26 20:07:28.815239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:28.208 [2024-11-26 20:07:28.815396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.208 [2024-11-26 20:07:28.815402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.208 [2024-11-26 20:07:28.815409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.208 [2024-11-26 20:07:28.815415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.208 [2024-11-26 20:07:28.827120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.208 [2024-11-26 20:07:28.827665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.208 [2024-11-26 20:07:28.827710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:28.208 [2024-11-26 20:07:28.827719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:28.208 [2024-11-26 20:07:28.827894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:28.208 [2024-11-26 20:07:28.828055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.208 [2024-11-26 20:07:28.828063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.208 [2024-11-26 20:07:28.828069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.208 [2024-11-26 20:07:28.828075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.208 [2024-11-26 20:07:28.839786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.208 [2024-11-26 20:07:28.840322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.208 [2024-11-26 20:07:28.840365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:28.208 [2024-11-26 20:07:28.840375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:28.208 [2024-11-26 20:07:28.840549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:28.208 [2024-11-26 20:07:28.840705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.208 [2024-11-26 20:07:28.840712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.208 [2024-11-26 20:07:28.840718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.208 [2024-11-26 20:07:28.840724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.208 [2024-11-26 20:07:28.852432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.208 [2024-11-26 20:07:28.853082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.208 [2024-11-26 20:07:28.853122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:28.208 [2024-11-26 20:07:28.853131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:28.208 [2024-11-26 20:07:28.853314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:28.208 [2024-11-26 20:07:28.853470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.208 [2024-11-26 20:07:28.853476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.208 [2024-11-26 20:07:28.853482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.208 [2024-11-26 20:07:28.853489] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.208 [2024-11-26 20:07:28.865043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.208 [2024-11-26 20:07:28.865612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.208 [2024-11-26 20:07:28.865631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:28.208 [2024-11-26 20:07:28.865636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:28.208 [2024-11-26 20:07:28.865788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:28.209 [2024-11-26 20:07:28.865938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.209 [2024-11-26 20:07:28.865944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.209 [2024-11-26 20:07:28.865955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.209 [2024-11-26 20:07:28.865960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.209 [2024-11-26 20:07:28.877659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.209 [2024-11-26 20:07:28.878262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.209 [2024-11-26 20:07:28.878298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:28.209 [2024-11-26 20:07:28.878308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:28.209 [2024-11-26 20:07:28.878481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:28.209 [2024-11-26 20:07:28.878635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.209 [2024-11-26 20:07:28.878642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.209 [2024-11-26 20:07:28.878648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.209 [2024-11-26 20:07:28.878654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.209 [2024-11-26 20:07:28.890362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.209 [2024-11-26 20:07:28.890937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.209 [2024-11-26 20:07:28.890973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:28.209 [2024-11-26 20:07:28.890982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:28.209 [2024-11-26 20:07:28.891151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:28.209 [2024-11-26 20:07:28.891312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.209 [2024-11-26 20:07:28.891320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.209 [2024-11-26 20:07:28.891325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.209 [2024-11-26 20:07:28.891331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.209 [2024-11-26 20:07:28.903021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.209 [2024-11-26 20:07:28.903667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.209 [2024-11-26 20:07:28.903701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:28.209 [2024-11-26 20:07:28.903710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:28.209 [2024-11-26 20:07:28.903878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:28.209 [2024-11-26 20:07:28.904031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.209 [2024-11-26 20:07:28.904037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.209 [2024-11-26 20:07:28.904045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.209 [2024-11-26 20:07:28.904053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.209 [2024-11-26 20:07:28.915758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.209 [2024-11-26 20:07:28.916264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.209 [2024-11-26 20:07:28.916298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:28.209 [2024-11-26 20:07:28.916308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:28.209 [2024-11-26 20:07:28.916479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:28.209 [2024-11-26 20:07:28.916632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.209 [2024-11-26 20:07:28.916639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.209 [2024-11-26 20:07:28.916644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.209 [2024-11-26 20:07:28.916651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.209 [2024-11-26 20:07:28.928488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.209 [2024-11-26 20:07:28.928990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.209 [2024-11-26 20:07:28.929005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.209 [2024-11-26 20:07:28.929010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.209 [2024-11-26 20:07:28.929167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.209 [2024-11-26 20:07:28.929319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.209 [2024-11-26 20:07:28.929324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.209 [2024-11-26 20:07:28.929329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.209 [2024-11-26 20:07:28.929334] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.209 [2024-11-26 20:07:28.941151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.209 [2024-11-26 20:07:28.941683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.209 [2024-11-26 20:07:28.941714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.209 [2024-11-26 20:07:28.941722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.209 [2024-11-26 20:07:28.941891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.209 [2024-11-26 20:07:28.942044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.209 [2024-11-26 20:07:28.942050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.209 [2024-11-26 20:07:28.942056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.209 [2024-11-26 20:07:28.942061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.209 [2024-11-26 20:07:28.953757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.209 [2024-11-26 20:07:28.954258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.209 [2024-11-26 20:07:28.954273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.209 [2024-11-26 20:07:28.954282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.209 [2024-11-26 20:07:28.954432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.209 [2024-11-26 20:07:28.954582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.209 [2024-11-26 20:07:28.954588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.209 [2024-11-26 20:07:28.954593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.209 [2024-11-26 20:07:28.954598] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.209 [2024-11-26 20:07:28.966411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.209 [2024-11-26 20:07:28.966922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.209 [2024-11-26 20:07:28.966935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.209 [2024-11-26 20:07:28.966940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.209 [2024-11-26 20:07:28.967089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.209 [2024-11-26 20:07:28.967244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.209 [2024-11-26 20:07:28.967250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.209 [2024-11-26 20:07:28.967256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.209 [2024-11-26 20:07:28.967260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.209 [2024-11-26 20:07:28.979082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.209 [2024-11-26 20:07:28.979683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.209 [2024-11-26 20:07:28.979713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.209 [2024-11-26 20:07:28.979722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.209 [2024-11-26 20:07:28.979890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.209 [2024-11-26 20:07:28.980043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.209 [2024-11-26 20:07:28.980049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.209 [2024-11-26 20:07:28.980055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.209 [2024-11-26 20:07:28.980061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.209 [2024-11-26 20:07:28.991756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.209 [2024-11-26 20:07:28.992413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.209 [2024-11-26 20:07:28.992444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.210 [2024-11-26 20:07:28.992452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.210 [2024-11-26 20:07:28.992621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.210 [2024-11-26 20:07:28.992777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.210 [2024-11-26 20:07:28.992783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.210 [2024-11-26 20:07:28.992789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.210 [2024-11-26 20:07:28.992795] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.210 [2024-11-26 20:07:29.004483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.210 [2024-11-26 20:07:29.005049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.210 [2024-11-26 20:07:29.005079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.210 [2024-11-26 20:07:29.005087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.210 [2024-11-26 20:07:29.005262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.210 [2024-11-26 20:07:29.005416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.210 [2024-11-26 20:07:29.005422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.210 [2024-11-26 20:07:29.005427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.210 [2024-11-26 20:07:29.005433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.210 [2024-11-26 20:07:29.017118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.210 [2024-11-26 20:07:29.017624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.210 [2024-11-26 20:07:29.017639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.210 [2024-11-26 20:07:29.017645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.210 [2024-11-26 20:07:29.017794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.210 [2024-11-26 20:07:29.017944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.210 [2024-11-26 20:07:29.017950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.210 [2024-11-26 20:07:29.017955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.210 [2024-11-26 20:07:29.017959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.472 [2024-11-26 20:07:29.029780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.473 [2024-11-26 20:07:29.030376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.473 [2024-11-26 20:07:29.030405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.473 [2024-11-26 20:07:29.030414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.473 [2024-11-26 20:07:29.030583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.473 [2024-11-26 20:07:29.030736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.473 [2024-11-26 20:07:29.030742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.473 [2024-11-26 20:07:29.030752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.473 [2024-11-26 20:07:29.030757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.473 [2024-11-26 20:07:29.042451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.473 [2024-11-26 20:07:29.043011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.473 [2024-11-26 20:07:29.043041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.473 [2024-11-26 20:07:29.043050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.473 [2024-11-26 20:07:29.043222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.473 [2024-11-26 20:07:29.043376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.473 [2024-11-26 20:07:29.043382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.473 [2024-11-26 20:07:29.043387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.473 [2024-11-26 20:07:29.043393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.473 [2024-11-26 20:07:29.055068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.473 [2024-11-26 20:07:29.055671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.473 [2024-11-26 20:07:29.055701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.473 [2024-11-26 20:07:29.055710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.473 [2024-11-26 20:07:29.055875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.473 [2024-11-26 20:07:29.056028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.473 [2024-11-26 20:07:29.056035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.473 [2024-11-26 20:07:29.056040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.473 [2024-11-26 20:07:29.056046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.473 [2024-11-26 20:07:29.067738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.473 [2024-11-26 20:07:29.068271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.473 [2024-11-26 20:07:29.068301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.473 [2024-11-26 20:07:29.068310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.473 [2024-11-26 20:07:29.068478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.473 [2024-11-26 20:07:29.068631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.473 [2024-11-26 20:07:29.068638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.473 [2024-11-26 20:07:29.068644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.473 [2024-11-26 20:07:29.068650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.473 [2024-11-26 20:07:29.080348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.473 [2024-11-26 20:07:29.080742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.473 [2024-11-26 20:07:29.080757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.473 [2024-11-26 20:07:29.080762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.473 [2024-11-26 20:07:29.080912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.473 [2024-11-26 20:07:29.081062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.473 [2024-11-26 20:07:29.081068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.473 [2024-11-26 20:07:29.081073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.473 [2024-11-26 20:07:29.081078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.473 [2024-11-26 20:07:29.093049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.473 [2024-11-26 20:07:29.093437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.473 [2024-11-26 20:07:29.093450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.473 [2024-11-26 20:07:29.093456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.473 [2024-11-26 20:07:29.093605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.473 [2024-11-26 20:07:29.093754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.473 [2024-11-26 20:07:29.093760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.473 [2024-11-26 20:07:29.093765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.473 [2024-11-26 20:07:29.093770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.473 [2024-11-26 20:07:29.105729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.473 [2024-11-26 20:07:29.106175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.473 [2024-11-26 20:07:29.106188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.473 [2024-11-26 20:07:29.106194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.473 [2024-11-26 20:07:29.106344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.473 [2024-11-26 20:07:29.106493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.473 [2024-11-26 20:07:29.106499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.473 [2024-11-26 20:07:29.106505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.473 [2024-11-26 20:07:29.106510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.473 [2024-11-26 20:07:29.118458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.473 [2024-11-26 20:07:29.119038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.473 [2024-11-26 20:07:29.119068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.473 [2024-11-26 20:07:29.119080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.473 [2024-11-26 20:07:29.119251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.473 [2024-11-26 20:07:29.119405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.473 [2024-11-26 20:07:29.119411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.473 [2024-11-26 20:07:29.119416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.473 [2024-11-26 20:07:29.119422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.473 [2024-11-26 20:07:29.131100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.473 [2024-11-26 20:07:29.131630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.473 [2024-11-26 20:07:29.131645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.473 [2024-11-26 20:07:29.131650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.473 [2024-11-26 20:07:29.131800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.473 [2024-11-26 20:07:29.131950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.473 [2024-11-26 20:07:29.131956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.473 [2024-11-26 20:07:29.131962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.473 [2024-11-26 20:07:29.131967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.473 [2024-11-26 20:07:29.143790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.473 [2024-11-26 20:07:29.144195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.473 [2024-11-26 20:07:29.144209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.473 [2024-11-26 20:07:29.144214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.473 [2024-11-26 20:07:29.144363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.473 [2024-11-26 20:07:29.144513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.473 [2024-11-26 20:07:29.144519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.473 [2024-11-26 20:07:29.144524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.473 [2024-11-26 20:07:29.144529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.473 [2024-11-26 20:07:29.156482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.473 [2024-11-26 20:07:29.157047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.473 [2024-11-26 20:07:29.157076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.473 [2024-11-26 20:07:29.157085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.473 [2024-11-26 20:07:29.157257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.473 [2024-11-26 20:07:29.157416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.473 [2024-11-26 20:07:29.157423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.473 [2024-11-26 20:07:29.157429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.473 [2024-11-26 20:07:29.157435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.473 [2024-11-26 20:07:29.169124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.473 [2024-11-26 20:07:29.169724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.473 [2024-11-26 20:07:29.169755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.473 [2024-11-26 20:07:29.169764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.473 [2024-11-26 20:07:29.169929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.473 [2024-11-26 20:07:29.170083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.473 [2024-11-26 20:07:29.170091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.473 [2024-11-26 20:07:29.170096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.473 [2024-11-26 20:07:29.170102] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.473 [2024-11-26 20:07:29.181757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.473 [2024-11-26 20:07:29.182210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.473 [2024-11-26 20:07:29.182231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.473 [2024-11-26 20:07:29.182237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.473 [2024-11-26 20:07:29.182393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.473 [2024-11-26 20:07:29.182544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.473 [2024-11-26 20:07:29.182550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.473 [2024-11-26 20:07:29.182555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.473 [2024-11-26 20:07:29.182560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.473 [2024-11-26 20:07:29.194382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.473 [2024-11-26 20:07:29.194848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.473 [2024-11-26 20:07:29.194862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.473 [2024-11-26 20:07:29.194867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.473 [2024-11-26 20:07:29.195016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.473 [2024-11-26 20:07:29.195170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.473 [2024-11-26 20:07:29.195176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.473 [2024-11-26 20:07:29.195184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.473 [2024-11-26 20:07:29.195189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.473 [2024-11-26 20:07:29.207000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.474 [2024-11-26 20:07:29.207559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.474 [2024-11-26 20:07:29.207589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.474 [2024-11-26 20:07:29.207597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.474 [2024-11-26 20:07:29.207765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.474 [2024-11-26 20:07:29.207919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.474 [2024-11-26 20:07:29.207925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.474 [2024-11-26 20:07:29.207931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.474 [2024-11-26 20:07:29.207936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.474 [2024-11-26 20:07:29.219627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.474 [2024-11-26 20:07:29.220188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.474 [2024-11-26 20:07:29.220218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.474 [2024-11-26 20:07:29.220227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.474 [2024-11-26 20:07:29.220392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.474 [2024-11-26 20:07:29.220545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.474 [2024-11-26 20:07:29.220551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.474 [2024-11-26 20:07:29.220557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.474 [2024-11-26 20:07:29.220563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.474 [2024-11-26 20:07:29.232263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.474 [2024-11-26 20:07:29.232749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.474 [2024-11-26 20:07:29.232763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.474 [2024-11-26 20:07:29.232769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.474 [2024-11-26 20:07:29.232919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.474 [2024-11-26 20:07:29.233069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.474 [2024-11-26 20:07:29.233074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.474 [2024-11-26 20:07:29.233079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.474 [2024-11-26 20:07:29.233084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.474 [2024-11-26 20:07:29.244920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.474 [2024-11-26 20:07:29.245486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.474 [2024-11-26 20:07:29.245516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.474 [2024-11-26 20:07:29.245525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.474 [2024-11-26 20:07:29.245690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.474 [2024-11-26 20:07:29.245844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.474 [2024-11-26 20:07:29.245850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.474 [2024-11-26 20:07:29.245856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.474 [2024-11-26 20:07:29.245862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.474 [2024-11-26 20:07:29.257553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.474 [2024-11-26 20:07:29.258034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.474 [2024-11-26 20:07:29.258048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.474 [2024-11-26 20:07:29.258054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.474 [2024-11-26 20:07:29.258207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.474 [2024-11-26 20:07:29.258365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.474 [2024-11-26 20:07:29.258371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.474 [2024-11-26 20:07:29.258376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.474 [2024-11-26 20:07:29.258381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.474 [2024-11-26 20:07:29.270203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.474 [2024-11-26 20:07:29.270753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.474 [2024-11-26 20:07:29.270783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.474 [2024-11-26 20:07:29.270792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.474 [2024-11-26 20:07:29.270957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.474 [2024-11-26 20:07:29.271110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.474 [2024-11-26 20:07:29.271116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.474 [2024-11-26 20:07:29.271122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.474 [2024-11-26 20:07:29.271127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.474 [2024-11-26 20:07:29.282840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.474 [2024-11-26 20:07:29.283313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.474 [2024-11-26 20:07:29.283327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.474 [2024-11-26 20:07:29.283336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.474 [2024-11-26 20:07:29.283487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.474 [2024-11-26 20:07:29.283637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.474 [2024-11-26 20:07:29.283642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.474 [2024-11-26 20:07:29.283648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.474 [2024-11-26 20:07:29.283652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.736 [2024-11-26 20:07:29.295499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.736 [2024-11-26 20:07:29.295980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.736 [2024-11-26 20:07:29.295994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.736 [2024-11-26 20:07:29.296000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.736 [2024-11-26 20:07:29.296149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.736 [2024-11-26 20:07:29.296304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.736 [2024-11-26 20:07:29.296310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.736 [2024-11-26 20:07:29.296316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.736 [2024-11-26 20:07:29.296321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.736 [2024-11-26 20:07:29.308192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.736 [2024-11-26 20:07:29.308674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.736 [2024-11-26 20:07:29.308686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.737 [2024-11-26 20:07:29.308692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.737 [2024-11-26 20:07:29.308842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.737 [2024-11-26 20:07:29.308991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.737 [2024-11-26 20:07:29.308997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.737 [2024-11-26 20:07:29.309001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.737 [2024-11-26 20:07:29.309006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.737 [2024-11-26 20:07:29.320844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.737 [2024-11-26 20:07:29.321427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.737 [2024-11-26 20:07:29.321458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.737 [2024-11-26 20:07:29.321467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.737 [2024-11-26 20:07:29.321632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.737 [2024-11-26 20:07:29.321789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.737 [2024-11-26 20:07:29.321797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.737 [2024-11-26 20:07:29.321802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.737 [2024-11-26 20:07:29.321808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.737 [2024-11-26 20:07:29.333495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.737 [2024-11-26 20:07:29.333979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.737 [2024-11-26 20:07:29.333994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.737 [2024-11-26 20:07:29.333999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.737 [2024-11-26 20:07:29.334150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.737 [2024-11-26 20:07:29.334304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.737 [2024-11-26 20:07:29.334311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.737 [2024-11-26 20:07:29.334316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.737 [2024-11-26 20:07:29.334321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.737 [2024-11-26 20:07:29.346125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.737 [2024-11-26 20:07:29.346690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.737 [2024-11-26 20:07:29.346721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.737 [2024-11-26 20:07:29.346729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.737 [2024-11-26 20:07:29.346895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.737 [2024-11-26 20:07:29.347048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.737 [2024-11-26 20:07:29.347054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.737 [2024-11-26 20:07:29.347059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.737 [2024-11-26 20:07:29.347065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.737 [2024-11-26 20:07:29.358749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.737 [2024-11-26 20:07:29.359319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.737 [2024-11-26 20:07:29.359349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.737 [2024-11-26 20:07:29.359358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.737 [2024-11-26 20:07:29.359523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.737 [2024-11-26 20:07:29.359676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.737 [2024-11-26 20:07:29.359682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.737 [2024-11-26 20:07:29.359688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.737 [2024-11-26 20:07:29.359697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.737 [2024-11-26 20:07:29.371386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.737 [2024-11-26 20:07:29.371953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.737 [2024-11-26 20:07:29.371983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.737 [2024-11-26 20:07:29.371992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.737 [2024-11-26 20:07:29.372165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.737 [2024-11-26 20:07:29.372319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.737 [2024-11-26 20:07:29.372325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.737 [2024-11-26 20:07:29.372331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.737 [2024-11-26 20:07:29.372336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.737 [2024-11-26 20:07:29.384026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.737 [2024-11-26 20:07:29.384505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.737 [2024-11-26 20:07:29.384520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.737 [2024-11-26 20:07:29.384525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.737 [2024-11-26 20:07:29.384676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.737 [2024-11-26 20:07:29.384825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.737 [2024-11-26 20:07:29.384831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.737 [2024-11-26 20:07:29.384836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.737 [2024-11-26 20:07:29.384841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.737 [2024-11-26 20:07:29.396668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.737 [2024-11-26 20:07:29.397227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.737 [2024-11-26 20:07:29.397257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.737 [2024-11-26 20:07:29.397266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.737 [2024-11-26 20:07:29.397431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.737 [2024-11-26 20:07:29.397584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.737 [2024-11-26 20:07:29.397590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.737 [2024-11-26 20:07:29.397596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.737 [2024-11-26 20:07:29.397602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.737 [2024-11-26 20:07:29.409288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.737 [2024-11-26 20:07:29.409888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.737 [2024-11-26 20:07:29.409918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.737 [2024-11-26 20:07:29.409927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.737 [2024-11-26 20:07:29.410093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.737 [2024-11-26 20:07:29.410255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.737 [2024-11-26 20:07:29.410262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.737 [2024-11-26 20:07:29.410268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.737 [2024-11-26 20:07:29.410274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.737 [2024-11-26 20:07:29.421968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.737 [2024-11-26 20:07:29.422453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.737 [2024-11-26 20:07:29.422468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.737 [2024-11-26 20:07:29.422474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.737 [2024-11-26 20:07:29.422625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.737 [2024-11-26 20:07:29.422775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.737 [2024-11-26 20:07:29.422781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.737 [2024-11-26 20:07:29.422786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.737 [2024-11-26 20:07:29.422791] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.737 [2024-11-26 20:07:29.434605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.738 [2024-11-26 20:07:29.435065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.738 [2024-11-26 20:07:29.435096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.738 [2024-11-26 20:07:29.435105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.738 [2024-11-26 20:07:29.435280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.738 [2024-11-26 20:07:29.435435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.738 [2024-11-26 20:07:29.435441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.738 [2024-11-26 20:07:29.435446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.738 [2024-11-26 20:07:29.435452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.738 [2024-11-26 20:07:29.447274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.738 [2024-11-26 20:07:29.447749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.738 [2024-11-26 20:07:29.447764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:28.738 [2024-11-26 20:07:29.447773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:28.738 [2024-11-26 20:07:29.447923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:28.738 [2024-11-26 20:07:29.448072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.738 [2024-11-26 20:07:29.448078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.738 [2024-11-26 20:07:29.448083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.738 [2024-11-26 20:07:29.448088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.738 [2024-11-26 20:07:29.459903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.738 [2024-11-26 20:07:29.460460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.738 [2024-11-26 20:07:29.460490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:28.738 [2024-11-26 20:07:29.460499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:28.738 [2024-11-26 20:07:29.460664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:28.738 [2024-11-26 20:07:29.460817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.738 [2024-11-26 20:07:29.460823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.738 [2024-11-26 20:07:29.460829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.738 [2024-11-26 20:07:29.460835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.738 7621.00 IOPS, 29.77 MiB/s [2024-11-26T19:07:29.559Z] [2024-11-26 20:07:29.472527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.738 [2024-11-26 20:07:29.473096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.738 [2024-11-26 20:07:29.473127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:28.738 [2024-11-26 20:07:29.473136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:28.738 [2024-11-26 20:07:29.473308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:28.738 [2024-11-26 20:07:29.473462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.738 [2024-11-26 20:07:29.473469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.738 [2024-11-26 20:07:29.473474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.738 [2024-11-26 20:07:29.473479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.738 [2024-11-26 20:07:29.485169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.738 [2024-11-26 20:07:29.485716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.738 [2024-11-26 20:07:29.485746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:28.738 [2024-11-26 20:07:29.485754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:28.738 [2024-11-26 20:07:29.485920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:28.738 [2024-11-26 20:07:29.486077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.738 [2024-11-26 20:07:29.486083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.738 [2024-11-26 20:07:29.486089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.738 [2024-11-26 20:07:29.486094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.738 [2024-11-26 20:07:29.497788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.738 [2024-11-26 20:07:29.498392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.738 [2024-11-26 20:07:29.498423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:28.738 [2024-11-26 20:07:29.498431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:28.738 [2024-11-26 20:07:29.498596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:28.738 [2024-11-26 20:07:29.498749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.738 [2024-11-26 20:07:29.498756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.738 [2024-11-26 20:07:29.498761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.738 [2024-11-26 20:07:29.498767] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.738 [2024-11-26 20:07:29.510453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.738 [2024-11-26 20:07:29.511022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.738 [2024-11-26 20:07:29.511052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:28.738 [2024-11-26 20:07:29.511061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:28.738 [2024-11-26 20:07:29.511233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:28.738 [2024-11-26 20:07:29.511387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.738 [2024-11-26 20:07:29.511393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.738 [2024-11-26 20:07:29.511399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.738 [2024-11-26 20:07:29.511405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.738 [2024-11-26 20:07:29.523076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.738 [2024-11-26 20:07:29.523656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.738 [2024-11-26 20:07:29.523686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:28.738 [2024-11-26 20:07:29.523695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:28.738 [2024-11-26 20:07:29.523860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:28.738 [2024-11-26 20:07:29.524013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.738 [2024-11-26 20:07:29.524020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.738 [2024-11-26 20:07:29.524029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.738 [2024-11-26 20:07:29.524035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.738 [2024-11-26 20:07:29.535719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.738 [2024-11-26 20:07:29.536359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.738 [2024-11-26 20:07:29.536389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:28.738 [2024-11-26 20:07:29.536398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:28.738 [2024-11-26 20:07:29.536563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:28.738 [2024-11-26 20:07:29.536716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.738 [2024-11-26 20:07:29.536722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.738 [2024-11-26 20:07:29.536728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.738 [2024-11-26 20:07:29.536734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.738 [2024-11-26 20:07:29.548426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.738 [2024-11-26 20:07:29.548987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.738 [2024-11-26 20:07:29.549017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:28.738 [2024-11-26 20:07:29.549026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:28.738 [2024-11-26 20:07:29.549200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:28.738 [2024-11-26 20:07:29.549354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.738 [2024-11-26 20:07:29.549360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.739 [2024-11-26 20:07:29.549365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.739 [2024-11-26 20:07:29.549371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.002 [2024-11-26 20:07:29.561053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.002 [2024-11-26 20:07:29.561613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.002 [2024-11-26 20:07:29.561644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.002 [2024-11-26 20:07:29.561653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.002 [2024-11-26 20:07:29.561821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.002 [2024-11-26 20:07:29.561974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.002 [2024-11-26 20:07:29.561980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.002 [2024-11-26 20:07:29.561986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.002 [2024-11-26 20:07:29.561992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.002 [2024-11-26 20:07:29.573676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.002 [2024-11-26 20:07:29.574264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.002 [2024-11-26 20:07:29.574295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.002 [2024-11-26 20:07:29.574303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.002 [2024-11-26 20:07:29.574469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.002 [2024-11-26 20:07:29.574622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.002 [2024-11-26 20:07:29.574628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.002 [2024-11-26 20:07:29.574634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.002 [2024-11-26 20:07:29.574640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.002 [2024-11-26 20:07:29.586359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.002 [2024-11-26 20:07:29.586834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.002 [2024-11-26 20:07:29.586864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.002 [2024-11-26 20:07:29.586872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.002 [2024-11-26 20:07:29.587038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.002 [2024-11-26 20:07:29.587198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.002 [2024-11-26 20:07:29.587205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.002 [2024-11-26 20:07:29.587211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.002 [2024-11-26 20:07:29.587216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.002 [2024-11-26 20:07:29.599042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.002 [2024-11-26 20:07:29.599591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.002 [2024-11-26 20:07:29.599622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.002 [2024-11-26 20:07:29.599630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.002 [2024-11-26 20:07:29.599796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.002 [2024-11-26 20:07:29.599949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.002 [2024-11-26 20:07:29.599955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.002 [2024-11-26 20:07:29.599960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.002 [2024-11-26 20:07:29.599966] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.002 [2024-11-26 20:07:29.611647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.002 [2024-11-26 20:07:29.612102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.002 [2024-11-26 20:07:29.612117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.002 [2024-11-26 20:07:29.612127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.002 [2024-11-26 20:07:29.612282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.002 [2024-11-26 20:07:29.612433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.002 [2024-11-26 20:07:29.612438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.003 [2024-11-26 20:07:29.612443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.003 [2024-11-26 20:07:29.612448] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.003 [2024-11-26 20:07:29.624276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.003 [2024-11-26 20:07:29.624702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.003 [2024-11-26 20:07:29.624715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.003 [2024-11-26 20:07:29.624720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.003 [2024-11-26 20:07:29.624870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.003 [2024-11-26 20:07:29.625019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.003 [2024-11-26 20:07:29.625025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.003 [2024-11-26 20:07:29.625030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.003 [2024-11-26 20:07:29.625035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.003 [2024-11-26 20:07:29.636994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.003 [2024-11-26 20:07:29.637534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.003 [2024-11-26 20:07:29.637564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.003 [2024-11-26 20:07:29.637572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.003 [2024-11-26 20:07:29.637738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.003 [2024-11-26 20:07:29.637891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.003 [2024-11-26 20:07:29.637897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.003 [2024-11-26 20:07:29.637903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.003 [2024-11-26 20:07:29.637909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.003 [2024-11-26 20:07:29.649590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.003 [2024-11-26 20:07:29.650086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.003 [2024-11-26 20:07:29.650101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.003 [2024-11-26 20:07:29.650107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.003 [2024-11-26 20:07:29.650261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.003 [2024-11-26 20:07:29.650415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.003 [2024-11-26 20:07:29.650421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.003 [2024-11-26 20:07:29.650426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.003 [2024-11-26 20:07:29.650431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.003 [2024-11-26 20:07:29.662228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.003 [2024-11-26 20:07:29.662578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.003 [2024-11-26 20:07:29.662592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.003 [2024-11-26 20:07:29.662598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.003 [2024-11-26 20:07:29.662748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.003 [2024-11-26 20:07:29.662898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.003 [2024-11-26 20:07:29.662904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.003 [2024-11-26 20:07:29.662909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.003 [2024-11-26 20:07:29.662914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.003 [2024-11-26 20:07:29.674896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.003 [2024-11-26 20:07:29.675478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.003 [2024-11-26 20:07:29.675508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.003 [2024-11-26 20:07:29.675517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.003 [2024-11-26 20:07:29.675683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.003 [2024-11-26 20:07:29.675836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.003 [2024-11-26 20:07:29.675843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.003 [2024-11-26 20:07:29.675849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.003 [2024-11-26 20:07:29.675855] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.003 [2024-11-26 20:07:29.687555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.003 [2024-11-26 20:07:29.688033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.003 [2024-11-26 20:07:29.688048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.003 [2024-11-26 20:07:29.688053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.003 [2024-11-26 20:07:29.688208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.003 [2024-11-26 20:07:29.688359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.003 [2024-11-26 20:07:29.688365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.003 [2024-11-26 20:07:29.688373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.003 [2024-11-26 20:07:29.688379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.003 [2024-11-26 20:07:29.700276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.003 [2024-11-26 20:07:29.700826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.003 [2024-11-26 20:07:29.700857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.003 [2024-11-26 20:07:29.700865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.003 [2024-11-26 20:07:29.701030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.003 [2024-11-26 20:07:29.701191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.003 [2024-11-26 20:07:29.701198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.003 [2024-11-26 20:07:29.701204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.003 [2024-11-26 20:07:29.701209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.003 [2024-11-26 20:07:29.712887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.003 [2024-11-26 20:07:29.713445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.003 [2024-11-26 20:07:29.713476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.003 [2024-11-26 20:07:29.713484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.003 [2024-11-26 20:07:29.713650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.003 [2024-11-26 20:07:29.713802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.003 [2024-11-26 20:07:29.713808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.003 [2024-11-26 20:07:29.713814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.003 [2024-11-26 20:07:29.713820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.003 [2024-11-26 20:07:29.725502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.003 [2024-11-26 20:07:29.726068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.003 [2024-11-26 20:07:29.726098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.003 [2024-11-26 20:07:29.726107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.003 [2024-11-26 20:07:29.726283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.003 [2024-11-26 20:07:29.726437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.003 [2024-11-26 20:07:29.726443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.003 [2024-11-26 20:07:29.726449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.003 [2024-11-26 20:07:29.726455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
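Each cycle walks the same controller-reset state machine: nvme_ctrlr_disconnect() tears the old connection down, the asynchronous reconnect tries to open a new TCP qpair, the connect is refused, the follow-up flush fails with EBADF (the "(9): Bad file descriptor" above, errno 9) because the socket is already gone, and spdk_nvme_ctrlr_reconnect_poll_async() reports the reinitialization failure back to bdev_nvme. The following is a rough sketch of that sequence against SPDK's public controller API (spdk/nvme.h), assuming the disconnect/reconnect_async/reconnect_poll_async trio named in the log; real code drives the polling from a reactor poller rather than the busy-wait shown here.

/* Sketch only: one reset attempt, the way the log's call chain runs it. */
#include <errno.h>
#include <stdbool.h>
#include "spdk/nvme.h"

static bool
try_reset(struct spdk_nvme_ctrlr *ctrlr)
{
	if (spdk_nvme_ctrlr_disconnect(ctrlr) != 0) {
		return false;	/* a disconnect is already in flight */
	}

	spdk_nvme_ctrlr_reconnect_async(ctrlr);

	int rc;
	do {
		rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
	} while (rc == -EAGAIN);	/* connect still in progress */

	/* rc != 0 here is the "controller reinitialization failed" /
	 * "Resetting controller failed" path above: connect() was refused,
	 * so init never completes and the ctrlr is marked failed. */
	return rc == 0;
}

On the failure path bdev_nvme does not give up; it schedules another attempt, which is why the same nine records repeat every ~12.6 ms throughout this section.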
00:29:29.003 [2024-11-26 20:07:29.738122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.003 [2024-11-26 20:07:29.738687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.003 [2024-11-26 20:07:29.738717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.003 [2024-11-26 20:07:29.738725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.003 [2024-11-26 20:07:29.738891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.003 [2024-11-26 20:07:29.739044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.004 [2024-11-26 20:07:29.739050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.004 [2024-11-26 20:07:29.739056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.004 [2024-11-26 20:07:29.739061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.004 [2024-11-26 20:07:29.750743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.004 [2024-11-26 20:07:29.751083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.004 [2024-11-26 20:07:29.751098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.004 [2024-11-26 20:07:29.751104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.004 [2024-11-26 20:07:29.751259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.004 [2024-11-26 20:07:29.751409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.004 [2024-11-26 20:07:29.751415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.004 [2024-11-26 20:07:29.751420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.004 [2024-11-26 20:07:29.751425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.004 [2024-11-26 20:07:29.763397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.004 [2024-11-26 20:07:29.763867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.004 [2024-11-26 20:07:29.763879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.004 [2024-11-26 20:07:29.763885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.004 [2024-11-26 20:07:29.764034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.004 [2024-11-26 20:07:29.764188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.004 [2024-11-26 20:07:29.764194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.004 [2024-11-26 20:07:29.764199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.004 [2024-11-26 20:07:29.764204] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.004 [2024-11-26 20:07:29.776035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.004 [2024-11-26 20:07:29.776484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.004 [2024-11-26 20:07:29.776497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.004 [2024-11-26 20:07:29.776506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.004 [2024-11-26 20:07:29.776656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.004 [2024-11-26 20:07:29.776807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.004 [2024-11-26 20:07:29.776812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.004 [2024-11-26 20:07:29.776817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.004 [2024-11-26 20:07:29.776822] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.004 [2024-11-26 20:07:29.788642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.004 [2024-11-26 20:07:29.789112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.004 [2024-11-26 20:07:29.789124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.004 [2024-11-26 20:07:29.789129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.004 [2024-11-26 20:07:29.789284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.004 [2024-11-26 20:07:29.789434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.004 [2024-11-26 20:07:29.789440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.004 [2024-11-26 20:07:29.789445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.004 [2024-11-26 20:07:29.789450] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.004 [2024-11-26 20:07:29.801288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.004 [2024-11-26 20:07:29.801851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.004 [2024-11-26 20:07:29.801881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.004 [2024-11-26 20:07:29.801890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.004 [2024-11-26 20:07:29.802056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.004 [2024-11-26 20:07:29.802216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.004 [2024-11-26 20:07:29.802223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.004 [2024-11-26 20:07:29.802229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.004 [2024-11-26 20:07:29.802234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.004 [2024-11-26 20:07:29.813929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.004 [2024-11-26 20:07:29.814531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.004 [2024-11-26 20:07:29.814561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.004 [2024-11-26 20:07:29.814570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.004 [2024-11-26 20:07:29.814735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.004 [2024-11-26 20:07:29.814892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.004 [2024-11-26 20:07:29.814899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.004 [2024-11-26 20:07:29.814904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.004 [2024-11-26 20:07:29.814910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.267 [2024-11-26 20:07:29.826594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.267 [2024-11-26 20:07:29.827164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.267 [2024-11-26 20:07:29.827194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.267 [2024-11-26 20:07:29.827203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.267 [2024-11-26 20:07:29.827368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.267 [2024-11-26 20:07:29.827521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.267 [2024-11-26 20:07:29.827528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.267 [2024-11-26 20:07:29.827533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.267 [2024-11-26 20:07:29.827539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.267 [2024-11-26 20:07:29.839216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.267 [2024-11-26 20:07:29.839710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.267 [2024-11-26 20:07:29.839724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.267 [2024-11-26 20:07:29.839730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.267 [2024-11-26 20:07:29.839880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.267 [2024-11-26 20:07:29.840030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.267 [2024-11-26 20:07:29.840036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.267 [2024-11-26 20:07:29.840041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.267 [2024-11-26 20:07:29.840046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.267 [2024-11-26 20:07:29.851858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.267 [2024-11-26 20:07:29.852405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.267 [2024-11-26 20:07:29.852436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.267 [2024-11-26 20:07:29.852444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.267 [2024-11-26 20:07:29.852609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.267 [2024-11-26 20:07:29.852762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.267 [2024-11-26 20:07:29.852769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.267 [2024-11-26 20:07:29.852782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.268 [2024-11-26 20:07:29.852788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.268 [2024-11-26 20:07:29.864483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.268 [2024-11-26 20:07:29.865031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.268 [2024-11-26 20:07:29.865061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.268 [2024-11-26 20:07:29.865069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.268 [2024-11-26 20:07:29.865243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.268 [2024-11-26 20:07:29.865397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.268 [2024-11-26 20:07:29.865404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.268 [2024-11-26 20:07:29.865410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.268 [2024-11-26 20:07:29.865415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.268 [2024-11-26 20:07:29.877100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.268 [2024-11-26 20:07:29.877667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.268 [2024-11-26 20:07:29.877697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.268 [2024-11-26 20:07:29.877706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.268 [2024-11-26 20:07:29.877871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.268 [2024-11-26 20:07:29.878024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.268 [2024-11-26 20:07:29.878030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.268 [2024-11-26 20:07:29.878036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.268 [2024-11-26 20:07:29.878042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.268 [2024-11-26 20:07:29.889767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.268 [2024-11-26 20:07:29.890333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.268 [2024-11-26 20:07:29.890363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.268 [2024-11-26 20:07:29.890372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.268 [2024-11-26 20:07:29.890538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.268 [2024-11-26 20:07:29.890691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.268 [2024-11-26 20:07:29.890697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.268 [2024-11-26 20:07:29.890703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.268 [2024-11-26 20:07:29.890708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.268 [2024-11-26 20:07:29.902401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.268 [2024-11-26 20:07:29.902984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.268 [2024-11-26 20:07:29.903014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.268 [2024-11-26 20:07:29.903023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.268 [2024-11-26 20:07:29.903194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.268 [2024-11-26 20:07:29.903348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.268 [2024-11-26 20:07:29.903354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.268 [2024-11-26 20:07:29.903359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.268 [2024-11-26 20:07:29.903365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.268 [2024-11-26 20:07:29.915047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.268 [2024-11-26 20:07:29.915644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.268 [2024-11-26 20:07:29.915674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.268 [2024-11-26 20:07:29.915683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.268 [2024-11-26 20:07:29.915848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.268 [2024-11-26 20:07:29.916001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.268 [2024-11-26 20:07:29.916007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.268 [2024-11-26 20:07:29.916013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.268 [2024-11-26 20:07:29.916020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.268 [2024-11-26 20:07:29.927722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.268 [2024-11-26 20:07:29.928203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.268 [2024-11-26 20:07:29.928218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.268 [2024-11-26 20:07:29.928224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.268 [2024-11-26 20:07:29.928375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.268 [2024-11-26 20:07:29.928526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.268 [2024-11-26 20:07:29.928531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.268 [2024-11-26 20:07:29.928537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.268 [2024-11-26 20:07:29.928541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.268 [2024-11-26 20:07:29.940371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.268 [2024-11-26 20:07:29.940799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.268 [2024-11-26 20:07:29.940828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.268 [2024-11-26 20:07:29.940840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.268 [2024-11-26 20:07:29.941007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.268 [2024-11-26 20:07:29.941167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.268 [2024-11-26 20:07:29.941174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.268 [2024-11-26 20:07:29.941179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.268 [2024-11-26 20:07:29.941185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.268 [2024-11-26 20:07:29.953056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.268 [2024-11-26 20:07:29.953692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.268 [2024-11-26 20:07:29.953723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.268 [2024-11-26 20:07:29.953731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.268 [2024-11-26 20:07:29.953897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.268 [2024-11-26 20:07:29.954050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.268 [2024-11-26 20:07:29.954056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.268 [2024-11-26 20:07:29.954062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.268 [2024-11-26 20:07:29.954067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.268 [2024-11-26 20:07:29.965740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.268 [2024-11-26 20:07:29.966326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.268 [2024-11-26 20:07:29.966356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.268 [2024-11-26 20:07:29.966365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.268 [2024-11-26 20:07:29.966530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.268 [2024-11-26 20:07:29.966684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.268 [2024-11-26 20:07:29.966690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.268 [2024-11-26 20:07:29.966695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.268 [2024-11-26 20:07:29.966701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.268 [2024-11-26 20:07:29.978381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.268 [2024-11-26 20:07:29.978880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.268 [2024-11-26 20:07:29.978894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.268 [2024-11-26 20:07:29.978900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.268 [2024-11-26 20:07:29.979049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.268 [2024-11-26 20:07:29.979208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.268 [2024-11-26 20:07:29.979214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.269 [2024-11-26 20:07:29.979219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.269 [2024-11-26 20:07:29.979224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.269 [2024-11-26 20:07:29.991043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.269 [2024-11-26 20:07:29.991634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.269 [2024-11-26 20:07:29.991664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.269 [2024-11-26 20:07:29.991672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.269 [2024-11-26 20:07:29.991837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.269 [2024-11-26 20:07:29.991990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.269 [2024-11-26 20:07:29.991997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.269 [2024-11-26 20:07:29.992002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.269 [2024-11-26 20:07:29.992008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.269 [2024-11-26 20:07:30.004187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.269 [2024-11-26 20:07:30.004764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.269 [2024-11-26 20:07:30.004795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.269 [2024-11-26 20:07:30.004804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.269 [2024-11-26 20:07:30.004970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.269 [2024-11-26 20:07:30.005125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.269 [2024-11-26 20:07:30.005132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.269 [2024-11-26 20:07:30.005137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.269 [2024-11-26 20:07:30.005144] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.269 [2024-11-26 20:07:30.016844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.269 [2024-11-26 20:07:30.017216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.269 [2024-11-26 20:07:30.017232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.269 [2024-11-26 20:07:30.017238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.269 [2024-11-26 20:07:30.017391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.269 [2024-11-26 20:07:30.017541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.269 [2024-11-26 20:07:30.017547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.269 [2024-11-26 20:07:30.017556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.269 [2024-11-26 20:07:30.017561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.269 [2024-11-26 20:07:30.029544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.269 [2024-11-26 20:07:30.030026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.269 [2024-11-26 20:07:30.030039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.269 [2024-11-26 20:07:30.030044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.269 [2024-11-26 20:07:30.030199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.269 [2024-11-26 20:07:30.030350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.269 [2024-11-26 20:07:30.030356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.269 [2024-11-26 20:07:30.030361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.269 [2024-11-26 20:07:30.030366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.269 [2024-11-26 20:07:30.042184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.269 [2024-11-26 20:07:30.042711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.269 [2024-11-26 20:07:30.042742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.269 [2024-11-26 20:07:30.042752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.269 [2024-11-26 20:07:30.042924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.269 [2024-11-26 20:07:30.043086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.269 [2024-11-26 20:07:30.043094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.269 [2024-11-26 20:07:30.043101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.269 [2024-11-26 20:07:30.043109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.269 [2024-11-26 20:07:30.054821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.269 [2024-11-26 20:07:30.055444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.269 [2024-11-26 20:07:30.055474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.269 [2024-11-26 20:07:30.055483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.269 [2024-11-26 20:07:30.055648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.269 [2024-11-26 20:07:30.055801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.269 [2024-11-26 20:07:30.055807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.269 [2024-11-26 20:07:30.055813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.269 [2024-11-26 20:07:30.055820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.269 [2024-11-26 20:07:30.067505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.269 [2024-11-26 20:07:30.068120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.269 [2024-11-26 20:07:30.068150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.269 [2024-11-26 20:07:30.068166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.269 [2024-11-26 20:07:30.068334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.269 [2024-11-26 20:07:30.068488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.269 [2024-11-26 20:07:30.068494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.269 [2024-11-26 20:07:30.068500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.269 [2024-11-26 20:07:30.068506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.269 [2024-11-26 20:07:30.080189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.269 [2024-11-26 20:07:30.080764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.269 [2024-11-26 20:07:30.080795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.269 [2024-11-26 20:07:30.080804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.269 [2024-11-26 20:07:30.080969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.269 [2024-11-26 20:07:30.081122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.269 [2024-11-26 20:07:30.081128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.269 [2024-11-26 20:07:30.081134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.269 [2024-11-26 20:07:30.081140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.533 [2024-11-26 20:07:30.092829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.533 [2024-11-26 20:07:30.093416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-11-26 20:07:30.093447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.533 [2024-11-26 20:07:30.093456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.533 [2024-11-26 20:07:30.093621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.533 [2024-11-26 20:07:30.093774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.533 [2024-11-26 20:07:30.093780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.533 [2024-11-26 20:07:30.093786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.533 [2024-11-26 20:07:30.093793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.533 [2024-11-26 20:07:30.105499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.533 [2024-11-26 20:07:30.106068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-11-26 20:07:30.106098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.533 [2024-11-26 20:07:30.106112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.533 [2024-11-26 20:07:30.106289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.533 [2024-11-26 20:07:30.106443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.533 [2024-11-26 20:07:30.106450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.533 [2024-11-26 20:07:30.106455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.533 [2024-11-26 20:07:30.106461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.533 [2024-11-26 20:07:30.118147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.533 [2024-11-26 20:07:30.118721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-11-26 20:07:30.118751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.533 [2024-11-26 20:07:30.118760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.533 [2024-11-26 20:07:30.118926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.533 [2024-11-26 20:07:30.119079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.533 [2024-11-26 20:07:30.119085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.533 [2024-11-26 20:07:30.119091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.533 [2024-11-26 20:07:30.119097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.533 [2024-11-26 20:07:30.130801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.533 [2024-11-26 20:07:30.131405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-11-26 20:07:30.131435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.533 [2024-11-26 20:07:30.131444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.533 [2024-11-26 20:07:30.131610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.533 [2024-11-26 20:07:30.131763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.533 [2024-11-26 20:07:30.131769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.533 [2024-11-26 20:07:30.131775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.533 [2024-11-26 20:07:30.131782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.533 [2024-11-26 20:07:30.143474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.533 [2024-11-26 20:07:30.144046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-11-26 20:07:30.144076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.533 [2024-11-26 20:07:30.144085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.533 [2024-11-26 20:07:30.144258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.533 [2024-11-26 20:07:30.144416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.534 [2024-11-26 20:07:30.144422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.534 [2024-11-26 20:07:30.144428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.534 [2024-11-26 20:07:30.144434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.534 [2024-11-26 20:07:30.156116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.534 [2024-11-26 20:07:30.156681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-11-26 20:07:30.156712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.534 [2024-11-26 20:07:30.156721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.534 [2024-11-26 20:07:30.156889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.534 [2024-11-26 20:07:30.157042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.534 [2024-11-26 20:07:30.157049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.534 [2024-11-26 20:07:30.157054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.534 [2024-11-26 20:07:30.157060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.534 [2024-11-26 20:07:30.168749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.534 [2024-11-26 20:07:30.169355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-11-26 20:07:30.169385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.534 [2024-11-26 20:07:30.169394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.534 [2024-11-26 20:07:30.169559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.534 [2024-11-26 20:07:30.169713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.534 [2024-11-26 20:07:30.169720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.534 [2024-11-26 20:07:30.169726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.534 [2024-11-26 20:07:30.169732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.534 [2024-11-26 20:07:30.181447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.534 [2024-11-26 20:07:30.182007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-11-26 20:07:30.182037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.534 [2024-11-26 20:07:30.182047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.534 [2024-11-26 20:07:30.182219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.534 [2024-11-26 20:07:30.182373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.534 [2024-11-26 20:07:30.182379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.534 [2024-11-26 20:07:30.182388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.534 [2024-11-26 20:07:30.182394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.534 [2024-11-26 20:07:30.194090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.534 [2024-11-26 20:07:30.194583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-11-26 20:07:30.194598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.534 [2024-11-26 20:07:30.194603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.534 [2024-11-26 20:07:30.194754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.534 [2024-11-26 20:07:30.194903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.534 [2024-11-26 20:07:30.194909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.534 [2024-11-26 20:07:30.194914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.534 [2024-11-26 20:07:30.194919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.534 [2024-11-26 20:07:30.206753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.534 [2024-11-26 20:07:30.207193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-11-26 20:07:30.207206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.534 [2024-11-26 20:07:30.207212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.534 [2024-11-26 20:07:30.207361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.534 [2024-11-26 20:07:30.207511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.534 [2024-11-26 20:07:30.207517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.534 [2024-11-26 20:07:30.207522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.534 [2024-11-26 20:07:30.207526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.534 [2024-11-26 20:07:30.219444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.534 [2024-11-26 20:07:30.220027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-11-26 20:07:30.220058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.534 [2024-11-26 20:07:30.220066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.534 [2024-11-26 20:07:30.220239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.534 [2024-11-26 20:07:30.220393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.534 [2024-11-26 20:07:30.220399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.534 [2024-11-26 20:07:30.220405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.534 [2024-11-26 20:07:30.220410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.534 [2024-11-26 20:07:30.232104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.534 [2024-11-26 20:07:30.232580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-11-26 20:07:30.232595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.534 [2024-11-26 20:07:30.232601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.534 [2024-11-26 20:07:30.232751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.534 [2024-11-26 20:07:30.232900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.534 [2024-11-26 20:07:30.232906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.534 [2024-11-26 20:07:30.232912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.534 [2024-11-26 20:07:30.232916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.534 [2024-11-26 20:07:30.244737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.534 [2024-11-26 20:07:30.245265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-11-26 20:07:30.245295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.534 [2024-11-26 20:07:30.245304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.534 [2024-11-26 20:07:30.245473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.534 [2024-11-26 20:07:30.245626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.534 [2024-11-26 20:07:30.245632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.534 [2024-11-26 20:07:30.245638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.534 [2024-11-26 20:07:30.245644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.534 [2024-11-26 20:07:30.257330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.534 [2024-11-26 20:07:30.257927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-11-26 20:07:30.257957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.534 [2024-11-26 20:07:30.257966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.534 [2024-11-26 20:07:30.258131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.534 [2024-11-26 20:07:30.258290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.534 [2024-11-26 20:07:30.258297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.534 [2024-11-26 20:07:30.258302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.534 [2024-11-26 20:07:30.258308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.534 [2024-11-26 20:07:30.269986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.534 [2024-11-26 20:07:30.270567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-11-26 20:07:30.270597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.534 [2024-11-26 20:07:30.270608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.534 [2024-11-26 20:07:30.270774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.534 [2024-11-26 20:07:30.270927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.535 [2024-11-26 20:07:30.270933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.535 [2024-11-26 20:07:30.270939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.535 [2024-11-26 20:07:30.270944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.535 [2024-11-26 20:07:30.282651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.535 [2024-11-26 20:07:30.283263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-26 20:07:30.283293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.535 [2024-11-26 20:07:30.283302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.535 [2024-11-26 20:07:30.283467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.535 [2024-11-26 20:07:30.283621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.535 [2024-11-26 20:07:30.283627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.535 [2024-11-26 20:07:30.283632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.535 [2024-11-26 20:07:30.283638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.535 [2024-11-26 20:07:30.295341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.535 [2024-11-26 20:07:30.295906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-26 20:07:30.295935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.535 [2024-11-26 20:07:30.295944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.535 [2024-11-26 20:07:30.296109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.535 [2024-11-26 20:07:30.296270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.535 [2024-11-26 20:07:30.296277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.535 [2024-11-26 20:07:30.296282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.535 [2024-11-26 20:07:30.296288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.535 [2024-11-26 20:07:30.307952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.535 [2024-11-26 20:07:30.308563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-26 20:07:30.308594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.535 [2024-11-26 20:07:30.308603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.535 [2024-11-26 20:07:30.308768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.535 [2024-11-26 20:07:30.308925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.535 [2024-11-26 20:07:30.308931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.535 [2024-11-26 20:07:30.308937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.535 [2024-11-26 20:07:30.308943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.535 [2024-11-26 20:07:30.320635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.535 [2024-11-26 20:07:30.321219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-26 20:07:30.321250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.535 [2024-11-26 20:07:30.321259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.535 [2024-11-26 20:07:30.321427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.535 [2024-11-26 20:07:30.321580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.535 [2024-11-26 20:07:30.321586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.535 [2024-11-26 20:07:30.321591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.535 [2024-11-26 20:07:30.321597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.535 [2024-11-26 20:07:30.333284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.535 [2024-11-26 20:07:30.333857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-26 20:07:30.333887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.535 [2024-11-26 20:07:30.333896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.535 [2024-11-26 20:07:30.334061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.535 [2024-11-26 20:07:30.334222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.535 [2024-11-26 20:07:30.334230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.535 [2024-11-26 20:07:30.334235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.535 [2024-11-26 20:07:30.334241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.535 [2024-11-26 20:07:30.345923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.535 [2024-11-26 20:07:30.346331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-11-26 20:07:30.346361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.535 [2024-11-26 20:07:30.346370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.535 [2024-11-26 20:07:30.346538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.535 [2024-11-26 20:07:30.346691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.535 [2024-11-26 20:07:30.346697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.535 [2024-11-26 20:07:30.346707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.535 [2024-11-26 20:07:30.346712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.799 [2024-11-26 20:07:30.358544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.800 [2024-11-26 20:07:30.359088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.800 [2024-11-26 20:07:30.359118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.800 [2024-11-26 20:07:30.359127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.800 [2024-11-26 20:07:30.359298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.800 [2024-11-26 20:07:30.359452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.800 [2024-11-26 20:07:30.359458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.800 [2024-11-26 20:07:30.359464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.800 [2024-11-26 20:07:30.359470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.800 [2024-11-26 20:07:30.371157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.800 [2024-11-26 20:07:30.371698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.800 [2024-11-26 20:07:30.371729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.800 [2024-11-26 20:07:30.371737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.800 [2024-11-26 20:07:30.371903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.800 [2024-11-26 20:07:30.372055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.800 [2024-11-26 20:07:30.372063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.800 [2024-11-26 20:07:30.372068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.800 [2024-11-26 20:07:30.372073] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.800 [2024-11-26 20:07:30.383776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.800 [2024-11-26 20:07:30.384278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.800 [2024-11-26 20:07:30.384309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.800 [2024-11-26 20:07:30.384317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.800 [2024-11-26 20:07:30.384486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.800 [2024-11-26 20:07:30.384638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.800 [2024-11-26 20:07:30.384645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.800 [2024-11-26 20:07:30.384650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.800 [2024-11-26 20:07:30.384656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.800 [2024-11-26 20:07:30.396512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.800 [2024-11-26 20:07:30.397083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.800 [2024-11-26 20:07:30.397114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.800 [2024-11-26 20:07:30.397122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.800 [2024-11-26 20:07:30.397297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.800 [2024-11-26 20:07:30.397452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.800 [2024-11-26 20:07:30.397458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.800 [2024-11-26 20:07:30.397464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.800 [2024-11-26 20:07:30.397469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.800 [2024-11-26 20:07:30.409153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.800 [2024-11-26 20:07:30.409653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.800 [2024-11-26 20:07:30.409668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.800 [2024-11-26 20:07:30.409673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.800 [2024-11-26 20:07:30.409823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.800 [2024-11-26 20:07:30.409973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.800 [2024-11-26 20:07:30.409979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.800 [2024-11-26 20:07:30.409984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.800 [2024-11-26 20:07:30.409989] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.800 [2024-11-26 20:07:30.421803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.800 [2024-11-26 20:07:30.422397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.800 [2024-11-26 20:07:30.422428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.800 [2024-11-26 20:07:30.422437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.800 [2024-11-26 20:07:30.422602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.800 [2024-11-26 20:07:30.422755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.800 [2024-11-26 20:07:30.422762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.800 [2024-11-26 20:07:30.422767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.800 [2024-11-26 20:07:30.422773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.800 [2024-11-26 20:07:30.434461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.800 [2024-11-26 20:07:30.435012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.800 [2024-11-26 20:07:30.435042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.800 [2024-11-26 20:07:30.435055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.800 [2024-11-26 20:07:30.435227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.800 [2024-11-26 20:07:30.435382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.800 [2024-11-26 20:07:30.435388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.800 [2024-11-26 20:07:30.435394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.800 [2024-11-26 20:07:30.435399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.800 [2024-11-26 20:07:30.447105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.800 [2024-11-26 20:07:30.447583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.800 [2024-11-26 20:07:30.447598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.800 [2024-11-26 20:07:30.447604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.800 [2024-11-26 20:07:30.447754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.800 [2024-11-26 20:07:30.447903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.800 [2024-11-26 20:07:30.447909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.800 [2024-11-26 20:07:30.447914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.800 [2024-11-26 20:07:30.447919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.800 [2024-11-26 20:07:30.459749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.800 [2024-11-26 20:07:30.460225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.800 [2024-11-26 20:07:30.460239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.800 [2024-11-26 20:07:30.460244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.800 [2024-11-26 20:07:30.460394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.800 [2024-11-26 20:07:30.460544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.800 [2024-11-26 20:07:30.460549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.800 [2024-11-26 20:07:30.460554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.800 [2024-11-26 20:07:30.460559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.800 6096.80 IOPS, 23.82 MiB/s [2024-11-26T19:07:30.621Z] [2024-11-26 20:07:30.472383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.800 [2024-11-26 20:07:30.472962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.800 [2024-11-26 20:07:30.472992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.800 [2024-11-26 20:07:30.473001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.800 [2024-11-26 20:07:30.473173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.800 [2024-11-26 20:07:30.473334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.800 [2024-11-26 20:07:30.473340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.800 [2024-11-26 20:07:30.473346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.800 [2024-11-26 20:07:30.473351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.801 [2024-11-26 20:07:30.485040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.801 [2024-11-26 20:07:30.485599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.801 [2024-11-26 20:07:30.485614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.801 [2024-11-26 20:07:30.485620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.801 [2024-11-26 20:07:30.485770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.801 [2024-11-26 20:07:30.485920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.801 [2024-11-26 20:07:30.485925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.801 [2024-11-26 20:07:30.485931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.801 [2024-11-26 20:07:30.485935] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.801 [2024-11-26 20:07:30.497769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.801 [2024-11-26 20:07:30.498261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.801 [2024-11-26 20:07:30.498274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.801 [2024-11-26 20:07:30.498279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.801 [2024-11-26 20:07:30.498429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.801 [2024-11-26 20:07:30.498578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.801 [2024-11-26 20:07:30.498584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.801 [2024-11-26 20:07:30.498589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.801 [2024-11-26 20:07:30.498593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.801 [2024-11-26 20:07:30.510418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.801 [2024-11-26 20:07:30.510889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.801 [2024-11-26 20:07:30.510901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420
00:29:29.801 [2024-11-26 20:07:30.510906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set
00:29:29.801 [2024-11-26 20:07:30.511055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor
00:29:29.801 [2024-11-26 20:07:30.511209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.801 [2024-11-26 20:07:30.511216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.801 [2024-11-26 20:07:30.511224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.801 [2024-11-26 20:07:30.511228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.801 [2024-11-26 20:07:30.523049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.801 [2024-11-26 20:07:30.523478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.801 [2024-11-26 20:07:30.523491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.801 [2024-11-26 20:07:30.523496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.801 [2024-11-26 20:07:30.523645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.801 [2024-11-26 20:07:30.523795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.801 [2024-11-26 20:07:30.523800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.801 [2024-11-26 20:07:30.523805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.801 [2024-11-26 20:07:30.523810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.801 [2024-11-26 20:07:30.535772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.801 [2024-11-26 20:07:30.536253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.801 [2024-11-26 20:07:30.536265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.801 [2024-11-26 20:07:30.536270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.801 [2024-11-26 20:07:30.536420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.801 [2024-11-26 20:07:30.536569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.801 [2024-11-26 20:07:30.536575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.801 [2024-11-26 20:07:30.536580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.801 [2024-11-26 20:07:30.536585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.801 [2024-11-26 20:07:30.548402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.801 [2024-11-26 20:07:30.548887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.801 [2024-11-26 20:07:30.548899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.801 [2024-11-26 20:07:30.548904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.801 [2024-11-26 20:07:30.549053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.801 [2024-11-26 20:07:30.549206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.801 [2024-11-26 20:07:30.549213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.801 [2024-11-26 20:07:30.549218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.801 [2024-11-26 20:07:30.549223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.801 [2024-11-26 20:07:30.561042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.801 [2024-11-26 20:07:30.561598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.801 [2024-11-26 20:07:30.561628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.801 [2024-11-26 20:07:30.561637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.801 [2024-11-26 20:07:30.561803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.801 [2024-11-26 20:07:30.561956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.801 [2024-11-26 20:07:30.561962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.801 [2024-11-26 20:07:30.561967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.801 [2024-11-26 20:07:30.561973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.801 [2024-11-26 20:07:30.573666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.801 [2024-11-26 20:07:30.574141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.801 [2024-11-26 20:07:30.574155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.801 [2024-11-26 20:07:30.574166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.801 [2024-11-26 20:07:30.574317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.801 [2024-11-26 20:07:30.574466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.801 [2024-11-26 20:07:30.574473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.801 [2024-11-26 20:07:30.574478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.801 [2024-11-26 20:07:30.574483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.801 [2024-11-26 20:07:30.586335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.801 [2024-11-26 20:07:30.586873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.801 [2024-11-26 20:07:30.586903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.801 [2024-11-26 20:07:30.586912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.801 [2024-11-26 20:07:30.587077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.801 [2024-11-26 20:07:30.587236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.801 [2024-11-26 20:07:30.587244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.801 [2024-11-26 20:07:30.587249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.801 [2024-11-26 20:07:30.587255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.801 [2024-11-26 20:07:30.598947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.801 [2024-11-26 20:07:30.599344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.801 [2024-11-26 20:07:30.599360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.801 [2024-11-26 20:07:30.599369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.801 [2024-11-26 20:07:30.599519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.801 [2024-11-26 20:07:30.599669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.801 [2024-11-26 20:07:30.599675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.801 [2024-11-26 20:07:30.599680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.801 [2024-11-26 20:07:30.599685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.802 [2024-11-26 20:07:30.611648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.802 [2024-11-26 20:07:30.612733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.802 [2024-11-26 20:07:30.612754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:29.802 [2024-11-26 20:07:30.612760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:29.802 [2024-11-26 20:07:30.612917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:29.802 [2024-11-26 20:07:30.613069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.802 [2024-11-26 20:07:30.613075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.802 [2024-11-26 20:07:30.613080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.802 [2024-11-26 20:07:30.613085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.066 [2024-11-26 20:07:30.624349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.066 [2024-11-26 20:07:30.624807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.066 [2024-11-26 20:07:30.624819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.066 [2024-11-26 20:07:30.624825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.066 [2024-11-26 20:07:30.624974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.066 [2024-11-26 20:07:30.625125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.066 [2024-11-26 20:07:30.625130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.066 [2024-11-26 20:07:30.625135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.066 [2024-11-26 20:07:30.625140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.066 [2024-11-26 20:07:30.636964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.066 [2024-11-26 20:07:30.637432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.066 [2024-11-26 20:07:30.637463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.066 [2024-11-26 20:07:30.637472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.066 [2024-11-26 20:07:30.637640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.066 [2024-11-26 20:07:30.637797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.066 [2024-11-26 20:07:30.637804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.066 [2024-11-26 20:07:30.637809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.066 [2024-11-26 20:07:30.637815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.066 [2024-11-26 20:07:30.649643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.066 [2024-11-26 20:07:30.650124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.066 [2024-11-26 20:07:30.650138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.066 [2024-11-26 20:07:30.650144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.066 [2024-11-26 20:07:30.650346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.066 [2024-11-26 20:07:30.650497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.066 [2024-11-26 20:07:30.650503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.066 [2024-11-26 20:07:30.650508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.066 [2024-11-26 20:07:30.650513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.066 [2024-11-26 20:07:30.662330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.066 [2024-11-26 20:07:30.662764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.066 [2024-11-26 20:07:30.662794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.066 [2024-11-26 20:07:30.662802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.066 [2024-11-26 20:07:30.662968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.066 [2024-11-26 20:07:30.663120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.066 [2024-11-26 20:07:30.663126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.066 [2024-11-26 20:07:30.663132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.066 [2024-11-26 20:07:30.663138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.066 [2024-11-26 20:07:30.674980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.066 [2024-11-26 20:07:30.675575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.066 [2024-11-26 20:07:30.675606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.066 [2024-11-26 20:07:30.675615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.066 [2024-11-26 20:07:30.675780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.066 [2024-11-26 20:07:30.675934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.066 [2024-11-26 20:07:30.675941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.066 [2024-11-26 20:07:30.675951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.066 [2024-11-26 20:07:30.675957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.066 [2024-11-26 20:07:30.687660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.066 [2024-11-26 20:07:30.688376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.066 [2024-11-26 20:07:30.688407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.066 [2024-11-26 20:07:30.688416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.066 [2024-11-26 20:07:30.688582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.066 [2024-11-26 20:07:30.688735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.066 [2024-11-26 20:07:30.688741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.066 [2024-11-26 20:07:30.688746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.066 [2024-11-26 20:07:30.688752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.066 [2024-11-26 20:07:30.700323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.066 [2024-11-26 20:07:30.700791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.066 [2024-11-26 20:07:30.700806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.066 [2024-11-26 20:07:30.700811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.066 [2024-11-26 20:07:30.700961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.066 [2024-11-26 20:07:30.701111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.066 [2024-11-26 20:07:30.701117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.066 [2024-11-26 20:07:30.701122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.066 [2024-11-26 20:07:30.701127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.066 [2024-11-26 20:07:30.712946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.066 [2024-11-26 20:07:30.713543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.066 [2024-11-26 20:07:30.713574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.067 [2024-11-26 20:07:30.713582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.067 [2024-11-26 20:07:30.713748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.067 [2024-11-26 20:07:30.713900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.067 [2024-11-26 20:07:30.713907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.067 [2024-11-26 20:07:30.713912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.067 [2024-11-26 20:07:30.713918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.067 [2024-11-26 20:07:30.725607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.067 [2024-11-26 20:07:30.726231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-11-26 20:07:30.726262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.067 [2024-11-26 20:07:30.726270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.067 [2024-11-26 20:07:30.726438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.067 [2024-11-26 20:07:30.726591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.067 [2024-11-26 20:07:30.726598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.067 [2024-11-26 20:07:30.726603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.067 [2024-11-26 20:07:30.726609] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.067 [2024-11-26 20:07:30.738309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.067 [2024-11-26 20:07:30.738877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-11-26 20:07:30.738907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.067 [2024-11-26 20:07:30.738916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.067 [2024-11-26 20:07:30.739081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.067 [2024-11-26 20:07:30.739241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.067 [2024-11-26 20:07:30.739248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.067 [2024-11-26 20:07:30.739253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.067 [2024-11-26 20:07:30.739259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.067 [2024-11-26 20:07:30.750948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.067 [2024-11-26 20:07:30.751401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-11-26 20:07:30.751417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.067 [2024-11-26 20:07:30.751422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.067 [2024-11-26 20:07:30.751572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.067 [2024-11-26 20:07:30.751722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.067 [2024-11-26 20:07:30.751728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.067 [2024-11-26 20:07:30.751733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.067 [2024-11-26 20:07:30.751738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.067 [2024-11-26 20:07:30.763559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.067 [2024-11-26 20:07:30.764040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-11-26 20:07:30.764053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.067 [2024-11-26 20:07:30.764062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.067 [2024-11-26 20:07:30.764215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.067 [2024-11-26 20:07:30.764366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.067 [2024-11-26 20:07:30.764371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.067 [2024-11-26 20:07:30.764376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.067 [2024-11-26 20:07:30.764381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.067 [2024-11-26 20:07:30.776202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.067 [2024-11-26 20:07:30.776750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-11-26 20:07:30.776780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.067 [2024-11-26 20:07:30.776789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.067 [2024-11-26 20:07:30.776954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.067 [2024-11-26 20:07:30.777106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.067 [2024-11-26 20:07:30.777113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.067 [2024-11-26 20:07:30.777118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.067 [2024-11-26 20:07:30.777124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.067 [2024-11-26 20:07:30.788824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.067 [2024-11-26 20:07:30.789308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-11-26 20:07:30.789323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.067 [2024-11-26 20:07:30.789328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.067 [2024-11-26 20:07:30.789479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.067 [2024-11-26 20:07:30.789628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.067 [2024-11-26 20:07:30.789635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.067 [2024-11-26 20:07:30.789640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.067 [2024-11-26 20:07:30.789645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.067 [2024-11-26 20:07:30.801476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.067 [2024-11-26 20:07:30.801915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-11-26 20:07:30.801928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.067 [2024-11-26 20:07:30.801933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.067 [2024-11-26 20:07:30.802082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.067 [2024-11-26 20:07:30.802240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.067 [2024-11-26 20:07:30.802247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.067 [2024-11-26 20:07:30.802252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.067 [2024-11-26 20:07:30.802257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.067 [2024-11-26 20:07:30.814072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.067 [2024-11-26 20:07:30.814587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-11-26 20:07:30.814600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.067 [2024-11-26 20:07:30.814605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.067 [2024-11-26 20:07:30.814755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.067 [2024-11-26 20:07:30.814905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.067 [2024-11-26 20:07:30.814911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.067 [2024-11-26 20:07:30.814916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.067 [2024-11-26 20:07:30.814921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.067 [2024-11-26 20:07:30.826737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.067 [2024-11-26 20:07:30.827088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-11-26 20:07:30.827100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.067 [2024-11-26 20:07:30.827105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.067 [2024-11-26 20:07:30.827258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.067 [2024-11-26 20:07:30.827408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.067 [2024-11-26 20:07:30.827413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.067 [2024-11-26 20:07:30.827418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.067 [2024-11-26 20:07:30.827423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.067 [2024-11-26 20:07:30.839377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.068 [2024-11-26 20:07:30.839758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-11-26 20:07:30.839770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.068 [2024-11-26 20:07:30.839775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.068 [2024-11-26 20:07:30.839924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.068 [2024-11-26 20:07:30.840074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.068 [2024-11-26 20:07:30.840079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.068 [2024-11-26 20:07:30.840088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.068 [2024-11-26 20:07:30.840092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.068 [2024-11-26 20:07:30.852042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.068 [2024-11-26 20:07:30.852496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-11-26 20:07:30.852508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.068 [2024-11-26 20:07:30.852513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.068 [2024-11-26 20:07:30.852662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.068 [2024-11-26 20:07:30.852812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.068 [2024-11-26 20:07:30.852817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.068 [2024-11-26 20:07:30.852822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.068 [2024-11-26 20:07:30.852827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.068 [2024-11-26 20:07:30.864637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.068 [2024-11-26 20:07:30.865007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-11-26 20:07:30.865019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.068 [2024-11-26 20:07:30.865024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.068 [2024-11-26 20:07:30.865178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.068 [2024-11-26 20:07:30.865328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.068 [2024-11-26 20:07:30.865333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.068 [2024-11-26 20:07:30.865338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.068 [2024-11-26 20:07:30.865343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.068 [2024-11-26 20:07:30.877302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.068 [2024-11-26 20:07:30.877753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-11-26 20:07:30.877765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.068 [2024-11-26 20:07:30.877770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.068 [2024-11-26 20:07:30.877919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.068 [2024-11-26 20:07:30.878069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.068 [2024-11-26 20:07:30.878075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.068 [2024-11-26 20:07:30.878079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.068 [2024-11-26 20:07:30.878084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.331 [2024-11-26 20:07:30.889907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.331 [2024-11-26 20:07:30.890472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.331 [2024-11-26 20:07:30.890486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.331 [2024-11-26 20:07:30.890492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.331 [2024-11-26 20:07:30.890642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.331 [2024-11-26 20:07:30.890792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.331 [2024-11-26 20:07:30.890797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.331 [2024-11-26 20:07:30.890802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.331 [2024-11-26 20:07:30.890807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.331 [2024-11-26 20:07:30.902641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.331 [2024-11-26 20:07:30.903128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.331 [2024-11-26 20:07:30.903140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.331 [2024-11-26 20:07:30.903145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.331 [2024-11-26 20:07:30.903298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.331 [2024-11-26 20:07:30.903449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.331 [2024-11-26 20:07:30.903455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.331 [2024-11-26 20:07:30.903460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.331 [2024-11-26 20:07:30.903465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.331 [2024-11-26 20:07:30.915276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.331 [2024-11-26 20:07:30.915851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.331 [2024-11-26 20:07:30.915881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.331 [2024-11-26 20:07:30.915890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.331 [2024-11-26 20:07:30.916056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.331 [2024-11-26 20:07:30.916215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.331 [2024-11-26 20:07:30.916223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.331 [2024-11-26 20:07:30.916228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.331 [2024-11-26 20:07:30.916234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.331 [2024-11-26 20:07:30.927918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.331 [2024-11-26 20:07:30.928375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.331 [2024-11-26 20:07:30.928390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.331 [2024-11-26 20:07:30.928400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.331 [2024-11-26 20:07:30.928551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.331 [2024-11-26 20:07:30.928701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.331 [2024-11-26 20:07:30.928707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.331 [2024-11-26 20:07:30.928713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.331 [2024-11-26 20:07:30.928718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.331 [2024-11-26 20:07:30.940544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.331 [2024-11-26 20:07:30.940922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.331 [2024-11-26 20:07:30.940935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.331 [2024-11-26 20:07:30.940940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.331 [2024-11-26 20:07:30.941090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.331 [2024-11-26 20:07:30.941244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.331 [2024-11-26 20:07:30.941250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.331 [2024-11-26 20:07:30.941255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.331 [2024-11-26 20:07:30.941260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.331 [2024-11-26 20:07:30.953224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.331 [2024-11-26 20:07:30.953567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.331 [2024-11-26 20:07:30.953580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.331 [2024-11-26 20:07:30.953585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.331 [2024-11-26 20:07:30.953736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.331 [2024-11-26 20:07:30.953885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.331 [2024-11-26 20:07:30.953891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.331 [2024-11-26 20:07:30.953897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.331 [2024-11-26 20:07:30.953901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.331 [2024-11-26 20:07:30.965851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.331 [2024-11-26 20:07:30.966435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.331 [2024-11-26 20:07:30.966466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.331 [2024-11-26 20:07:30.966474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.331 [2024-11-26 20:07:30.966640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.331 [2024-11-26 20:07:30.966797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.331 [2024-11-26 20:07:30.966803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.331 [2024-11-26 20:07:30.966809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.331 [2024-11-26 20:07:30.966815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.331 [2024-11-26 20:07:30.978534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.332 [2024-11-26 20:07:30.979111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.332 [2024-11-26 20:07:30.979142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.332 [2024-11-26 20:07:30.979151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.332 [2024-11-26 20:07:30.979325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.332 [2024-11-26 20:07:30.979478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.332 [2024-11-26 20:07:30.979484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.332 [2024-11-26 20:07:30.979490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.332 [2024-11-26 20:07:30.979495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.332 [2024-11-26 20:07:30.991182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.332 [2024-11-26 20:07:30.991757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.332 [2024-11-26 20:07:30.991787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.332 [2024-11-26 20:07:30.991795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.332 [2024-11-26 20:07:30.991961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.332 [2024-11-26 20:07:30.992114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.332 [2024-11-26 20:07:30.992120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.332 [2024-11-26 20:07:30.992126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.332 [2024-11-26 20:07:30.992131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.332 [2024-11-26 20:07:31.003832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.332 [2024-11-26 20:07:31.004426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.332 [2024-11-26 20:07:31.004457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.332 [2024-11-26 20:07:31.004466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.332 [2024-11-26 20:07:31.004631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.332 [2024-11-26 20:07:31.004784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.332 [2024-11-26 20:07:31.004791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.332 [2024-11-26 20:07:31.004800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.332 [2024-11-26 20:07:31.004806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.332 [2024-11-26 20:07:31.016494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.332 [2024-11-26 20:07:31.016978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.332 [2024-11-26 20:07:31.016993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.332 [2024-11-26 20:07:31.016998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.332 [2024-11-26 20:07:31.017148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.332 [2024-11-26 20:07:31.017303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.332 [2024-11-26 20:07:31.017309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.332 [2024-11-26 20:07:31.017314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.332 [2024-11-26 20:07:31.017319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.332 [2024-11-26 20:07:31.029124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.332 [2024-11-26 20:07:31.029385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.332 [2024-11-26 20:07:31.029397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.332 [2024-11-26 20:07:31.029403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.332 [2024-11-26 20:07:31.029552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.332 [2024-11-26 20:07:31.029701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.332 [2024-11-26 20:07:31.029707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.332 [2024-11-26 20:07:31.029712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.332 [2024-11-26 20:07:31.029716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.332 [2024-11-26 20:07:31.041814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.332 [2024-11-26 20:07:31.042186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.332 [2024-11-26 20:07:31.042199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.332 [2024-11-26 20:07:31.042205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.332 [2024-11-26 20:07:31.042354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.332 [2024-11-26 20:07:31.042504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.332 [2024-11-26 20:07:31.042510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.332 [2024-11-26 20:07:31.042514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.332 [2024-11-26 20:07:31.042519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.332 [2024-11-26 20:07:31.054476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.332 [2024-11-26 20:07:31.054933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.332 [2024-11-26 20:07:31.054946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.332 [2024-11-26 20:07:31.054951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.332 [2024-11-26 20:07:31.055100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.332 [2024-11-26 20:07:31.055255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.332 [2024-11-26 20:07:31.055261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.332 [2024-11-26 20:07:31.055267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.332 [2024-11-26 20:07:31.055271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.332 [2024-11-26 20:07:31.067070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.332 [2024-11-26 20:07:31.067693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.332 [2024-11-26 20:07:31.067723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.332 [2024-11-26 20:07:31.067732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.332 [2024-11-26 20:07:31.067898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.332 [2024-11-26 20:07:31.068051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.332 [2024-11-26 20:07:31.068058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.332 [2024-11-26 20:07:31.068063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.332 [2024-11-26 20:07:31.068069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.333 [2024-11-26 20:07:31.079758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.333 [2024-11-26 20:07:31.080183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.333 [2024-11-26 20:07:31.080198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.333 [2024-11-26 20:07:31.080204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.333 [2024-11-26 20:07:31.080354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.333 [2024-11-26 20:07:31.080504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.333 [2024-11-26 20:07:31.080509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.333 [2024-11-26 20:07:31.080514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.333 [2024-11-26 20:07:31.080519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.333 [2024-11-26 20:07:31.092489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.333 [2024-11-26 20:07:31.092922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.333 [2024-11-26 20:07:31.092935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.333 [2024-11-26 20:07:31.092944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.333 [2024-11-26 20:07:31.093094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.333 [2024-11-26 20:07:31.093255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.333 [2024-11-26 20:07:31.093261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.333 [2024-11-26 20:07:31.093266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.333 [2024-11-26 20:07:31.093271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.333 [2024-11-26 20:07:31.105082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.333 [2024-11-26 20:07:31.105557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.333 [2024-11-26 20:07:31.105570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.333 [2024-11-26 20:07:31.105575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.333 [2024-11-26 20:07:31.105724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.333 [2024-11-26 20:07:31.105874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.333 [2024-11-26 20:07:31.105880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.333 [2024-11-26 20:07:31.105885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.333 [2024-11-26 20:07:31.105889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.333 [2024-11-26 20:07:31.117699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.333 [2024-11-26 20:07:31.118244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.333 [2024-11-26 20:07:31.118274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.333 [2024-11-26 20:07:31.118283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.333 [2024-11-26 20:07:31.118451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.333 [2024-11-26 20:07:31.118604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.333 [2024-11-26 20:07:31.118610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.333 [2024-11-26 20:07:31.118616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.333 [2024-11-26 20:07:31.118622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.333 [2024-11-26 20:07:31.130310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.333 [2024-11-26 20:07:31.130879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.333 [2024-11-26 20:07:31.130909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.333 [2024-11-26 20:07:31.130918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.333 [2024-11-26 20:07:31.131083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.333 [2024-11-26 20:07:31.131247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.333 [2024-11-26 20:07:31.131254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.333 [2024-11-26 20:07:31.131260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.333 [2024-11-26 20:07:31.131266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3840281 Killed "${NVMF_APP[@]}" "$@" 00:29:30.333 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:29:30.333 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:30.333 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:30.333 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:30.333 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:30.333 [2024-11-26 20:07:31.142944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.333 [2024-11-26 20:07:31.143370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.333 [2024-11-26 20:07:31.143400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.333 [2024-11-26 20:07:31.143408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.333 [2024-11-26 20:07:31.143577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.333 [2024-11-26 20:07:31.143730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.333 [2024-11-26 20:07:31.143736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.333 [2024-11-26 20:07:31.143741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.333 [2024-11-26 20:07:31.143747] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.596 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3841988 00:29:30.596 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3841988 00:29:30.596 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:30.596 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3841988 ']' 00:29:30.596 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.596 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:30.596 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.596 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:30.596 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:30.596 [2024-11-26 20:07:31.155588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.596 [2024-11-26 20:07:31.156059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.596 [2024-11-26 20:07:31.156074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.596 [2024-11-26 20:07:31.156079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.596 [2024-11-26 20:07:31.156238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.596 [2024-11-26 20:07:31.156389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.596 [2024-11-26 20:07:31.156395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.596 [2024-11-26 20:07:31.156400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.596 [2024-11-26 20:07:31.156405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.596 [2024-11-26 20:07:31.168224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.596 [2024-11-26 20:07:31.168685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.596 [2024-11-26 20:07:31.168698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.596 [2024-11-26 20:07:31.168703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.596 [2024-11-26 20:07:31.168853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.596 [2024-11-26 20:07:31.169002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.596 [2024-11-26 20:07:31.169008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.596 [2024-11-26 20:07:31.169013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.596 [2024-11-26 20:07:31.169018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.596 [2024-11-26 20:07:31.180836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.596 [2024-11-26 20:07:31.181290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.596 [2024-11-26 20:07:31.181320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.596 [2024-11-26 20:07:31.181329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.596 [2024-11-26 20:07:31.181498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.596 [2024-11-26 20:07:31.181651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.596 [2024-11-26 20:07:31.181657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.596 [2024-11-26 20:07:31.181663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.596 [2024-11-26 20:07:31.181669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.596 [2024-11-26 20:07:31.193518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.596 [2024-11-26 20:07:31.194007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.596 [2024-11-26 20:07:31.194022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.596 [2024-11-26 20:07:31.194028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.596 [2024-11-26 20:07:31.194183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.596 [2024-11-26 20:07:31.194334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.596 [2024-11-26 20:07:31.194347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.596 [2024-11-26 20:07:31.194354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.596 [2024-11-26 20:07:31.194359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.596 [2024-11-26 20:07:31.203241] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:29:30.596 [2024-11-26 20:07:31.203288] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:30.596 [2024-11-26 20:07:31.206186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.596 [2024-11-26 20:07:31.206721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.596 [2024-11-26 20:07:31.206752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.596 [2024-11-26 20:07:31.206761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.596 [2024-11-26 20:07:31.206928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.596 [2024-11-26 20:07:31.207081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.596 [2024-11-26 20:07:31.207088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.596 [2024-11-26 20:07:31.207094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.596 [2024-11-26 20:07:31.207100] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.596 [2024-11-26 20:07:31.218794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.596 [2024-11-26 20:07:31.219268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.596 [2024-11-26 20:07:31.219298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.596 [2024-11-26 20:07:31.219307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.596 [2024-11-26 20:07:31.219476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.596 [2024-11-26 20:07:31.219629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.596 [2024-11-26 20:07:31.219635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.596 [2024-11-26 20:07:31.219641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.596 [2024-11-26 20:07:31.219647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.596 [2024-11-26 20:07:31.231479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.596 [2024-11-26 20:07:31.231964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.596 [2024-11-26 20:07:31.231978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.596 [2024-11-26 20:07:31.231984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.596 [2024-11-26 20:07:31.232134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.596 [2024-11-26 20:07:31.232289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.596 [2024-11-26 20:07:31.232299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.596 [2024-11-26 20:07:31.232305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.596 [2024-11-26 20:07:31.232310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.596 [2024-11-26 20:07:31.244206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.596 [2024-11-26 20:07:31.244692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.596 [2024-11-26 20:07:31.244722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.596 [2024-11-26 20:07:31.244731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.596 [2024-11-26 20:07:31.244896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.596 [2024-11-26 20:07:31.245049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.597 [2024-11-26 20:07:31.245056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.597 [2024-11-26 20:07:31.245061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.597 [2024-11-26 20:07:31.245068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.597 [2024-11-26 20:07:31.256899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.597 [2024-11-26 20:07:31.257460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.597 [2024-11-26 20:07:31.257490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.597 [2024-11-26 20:07:31.257500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.597 [2024-11-26 20:07:31.257665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.597 [2024-11-26 20:07:31.257818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.597 [2024-11-26 20:07:31.257824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.597 [2024-11-26 20:07:31.257830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.597 [2024-11-26 20:07:31.257836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.597 [2024-11-26 20:07:31.269524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.597 [2024-11-26 20:07:31.269961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.597 [2024-11-26 20:07:31.269975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.597 [2024-11-26 20:07:31.269981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.597 [2024-11-26 20:07:31.270131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.597 [2024-11-26 20:07:31.270287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.597 [2024-11-26 20:07:31.270293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.597 [2024-11-26 20:07:31.270298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.597 [2024-11-26 20:07:31.270307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.597 [2024-11-26 20:07:31.271616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:30.597 [2024-11-26 20:07:31.282131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.597 [2024-11-26 20:07:31.282566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.597 [2024-11-26 20:07:31.282597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.597 [2024-11-26 20:07:31.282606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.597 [2024-11-26 20:07:31.282772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.597 [2024-11-26 20:07:31.282925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.597 [2024-11-26 20:07:31.282932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.597 [2024-11-26 20:07:31.282937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.597 [2024-11-26 20:07:31.282943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.597 [2024-11-26 20:07:31.294794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.597 [2024-11-26 20:07:31.295279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.597 [2024-11-26 20:07:31.295295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.597 [2024-11-26 20:07:31.295301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.597 [2024-11-26 20:07:31.295451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.597 [2024-11-26 20:07:31.295601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.597 [2024-11-26 20:07:31.295607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.597 [2024-11-26 20:07:31.295612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.597 [2024-11-26 20:07:31.295617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.597 [2024-11-26 20:07:31.300661] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:30.597 [2024-11-26 20:07:31.300683] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:30.597 [2024-11-26 20:07:31.300690] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:30.597 [2024-11-26 20:07:31.300695] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:30.597 [2024-11-26 20:07:31.300700] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:30.597 [2024-11-26 20:07:31.303174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:30.597 [2024-11-26 20:07:31.303277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:30.597 [2024-11-26 20:07:31.303392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.597 [2024-11-26 20:07:31.307434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.597 [2024-11-26 20:07:31.307909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.597 [2024-11-26 20:07:31.307940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.597 [2024-11-26 20:07:31.307950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.597 [2024-11-26 20:07:31.308121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.597 [2024-11-26 20:07:31.308281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.597 [2024-11-26 20:07:31.308288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.597 [2024-11-26 20:07:31.308294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.597 [2024-11-26 20:07:31.308300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.597 [2024-11-26 20:07:31.320115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.597 [2024-11-26 20:07:31.320610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.597 [2024-11-26 20:07:31.320626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.597 [2024-11-26 20:07:31.320632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.597 [2024-11-26 20:07:31.320784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.597 [2024-11-26 20:07:31.320934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.597 [2024-11-26 20:07:31.320940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.597 [2024-11-26 20:07:31.320946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.597 [2024-11-26 20:07:31.320950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.597 [2024-11-26 20:07:31.332775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.597 [2024-11-26 20:07:31.333218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.597 [2024-11-26 20:07:31.333232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.597 [2024-11-26 20:07:31.333238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.597 [2024-11-26 20:07:31.333389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.597 [2024-11-26 20:07:31.333539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.597 [2024-11-26 20:07:31.333545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.597 [2024-11-26 20:07:31.333550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.597 [2024-11-26 20:07:31.333555] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.597 [2024-11-26 20:07:31.345372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.597 [2024-11-26 20:07:31.345848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.597 [2024-11-26 20:07:31.345860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.597 [2024-11-26 20:07:31.345866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.597 [2024-11-26 20:07:31.346015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.597 [2024-11-26 20:07:31.346170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.597 [2024-11-26 20:07:31.346181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.597 [2024-11-26 20:07:31.346186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.597 [2024-11-26 20:07:31.346191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.597 [2024-11-26 20:07:31.358005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.597 [2024-11-26 20:07:31.358434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.598 [2024-11-26 20:07:31.358447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.598 [2024-11-26 20:07:31.358453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.598 [2024-11-26 20:07:31.358603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.598 [2024-11-26 20:07:31.358753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.598 [2024-11-26 20:07:31.358758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.598 [2024-11-26 20:07:31.358764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.598 [2024-11-26 20:07:31.358769] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.598 [2024-11-26 20:07:31.370726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.598 [2024-11-26 20:07:31.371261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.598 [2024-11-26 20:07:31.371294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.598 [2024-11-26 20:07:31.371303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.598 [2024-11-26 20:07:31.371474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.598 [2024-11-26 20:07:31.371627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.598 [2024-11-26 20:07:31.371633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.598 [2024-11-26 20:07:31.371639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.598 [2024-11-26 20:07:31.371645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.598 [2024-11-26 20:07:31.383332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.598 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:30.598 [2024-11-26 20:07:31.383916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.598 [2024-11-26 20:07:31.383948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.598 [2024-11-26 20:07:31.383957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.598 [2024-11-26 20:07:31.384124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.598 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:29:30.598 [2024-11-26 20:07:31.384294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.598 [2024-11-26 20:07:31.384302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.598 [2024-11-26 20:07:31.384312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.598 [2024-11-26 20:07:31.384317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.598 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:30.598 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:30.598 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:30.598 [2024-11-26 20:07:31.396016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.598 [2024-11-26 20:07:31.396636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.598 [2024-11-26 20:07:31.396666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.598 [2024-11-26 20:07:31.396677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.598 [2024-11-26 20:07:31.396843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.598 [2024-11-26 20:07:31.396997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.598 [2024-11-26 20:07:31.397003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.598 [2024-11-26 20:07:31.397008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.598 [2024-11-26 20:07:31.397014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.598 [2024-11-26 20:07:31.408708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.598 [2024-11-26 20:07:31.409261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.598 [2024-11-26 20:07:31.409291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.598 [2024-11-26 20:07:31.409300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.598 [2024-11-26 20:07:31.409471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.598 [2024-11-26 20:07:31.409624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.598 [2024-11-26 20:07:31.409630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.598 [2024-11-26 20:07:31.409636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.598 [2024-11-26 20:07:31.409641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.859 [2024-11-26 20:07:31.421381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.859 [2024-11-26 20:07:31.421872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.859 [2024-11-26 20:07:31.421887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.859 [2024-11-26 20:07:31.421893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.859 [2024-11-26 20:07:31.422043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.859 [2024-11-26 20:07:31.422198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.859 [2024-11-26 20:07:31.422204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.859 [2024-11-26 20:07:31.422215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.859 [2024-11-26 20:07:31.422220] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.859 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:30.859 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:30.859 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.859 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:30.859 [2024-11-26 20:07:31.434039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.859 [2024-11-26 20:07:31.434544] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:30.859 [2024-11-26 20:07:31.434641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.859 [2024-11-26 20:07:31.434672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.860 [2024-11-26 20:07:31.434682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.860 [2024-11-26 20:07:31.434848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.860 [2024-11-26 20:07:31.435001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.860 [2024-11-26 20:07:31.435008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.860 [2024-11-26 20:07:31.435013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.860 [2024-11-26 20:07:31.435019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.860 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.860 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:30.860 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.860 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:30.860 [2024-11-26 20:07:31.446708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.860 [2024-11-26 20:07:31.447226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-11-26 20:07:31.447242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.860 [2024-11-26 20:07:31.447247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.860 [2024-11-26 20:07:31.447398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.860 [2024-11-26 20:07:31.447548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.860 [2024-11-26 20:07:31.447554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.860 [2024-11-26 20:07:31.447559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.860 [2024-11-26 20:07:31.447564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.860 [2024-11-26 20:07:31.459383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.860 [2024-11-26 20:07:31.459841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-11-26 20:07:31.459854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.860 [2024-11-26 20:07:31.459863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.860 [2024-11-26 20:07:31.460013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.860 [2024-11-26 20:07:31.460170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.860 [2024-11-26 20:07:31.460176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.860 [2024-11-26 20:07:31.460181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.860 [2024-11-26 20:07:31.460186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.860 Malloc0 00:29:30.860 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.860 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:30.860 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.860 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:30.860 5080.67 IOPS, 19.85 MiB/s [2024-11-26T19:07:31.681Z] [2024-11-26 20:07:31.473129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.860 [2024-11-26 20:07:31.473610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-11-26 20:07:31.473640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.860 [2024-11-26 20:07:31.473649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.860 [2024-11-26 20:07:31.473814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.860 [2024-11-26 20:07:31.473968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.860 [2024-11-26 20:07:31.473974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.860 [2024-11-26 20:07:31.473980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.860 [2024-11-26 20:07:31.473986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.860 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.860 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:30.860 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.860 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:30.860 [2024-11-26 20:07:31.485824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.860 [2024-11-26 20:07:31.486255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-11-26 20:07:31.486285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4e010 with addr=10.0.0.2, port=4420 00:29:30.860 [2024-11-26 20:07:31.486294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e010 is same with the state(6) to be set 00:29:30.860 [2024-11-26 20:07:31.486462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4e010 (9): Bad file descriptor 00:29:30.860 [2024-11-26 20:07:31.486615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.860 [2024-11-26 20:07:31.486622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.860 [2024-11-26 20:07:31.486631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.860 [2024-11-26 20:07:31.486637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.860 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.860 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:30.860 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.860 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:30.860 [2024-11-26 20:07:31.498477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.860 [2024-11-26 20:07:31.498777] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:30.860 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.860 20:07:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3840673 00:29:30.860 [2024-11-26 20:07:31.618051] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:29:32.747 5932.00 IOPS, 23.17 MiB/s [2024-11-26T19:07:34.511Z] 6783.50 IOPS, 26.50 MiB/s [2024-11-26T19:07:35.897Z] 7455.11 IOPS, 29.12 MiB/s [2024-11-26T19:07:36.842Z] 8004.40 IOPS, 31.27 MiB/s [2024-11-26T19:07:37.783Z] 8458.00 IOPS, 33.04 MiB/s [2024-11-26T19:07:38.821Z] 8809.67 IOPS, 34.41 MiB/s [2024-11-26T19:07:39.907Z] 9112.92 IOPS, 35.60 MiB/s [2024-11-26T19:07:40.849Z] 9379.07 IOPS, 36.64 MiB/s 00:29:40.028 Latency(us) 00:29:40.028 [2024-11-26T19:07:40.849Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:40.028 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:40.028 Verification LBA range: start 0x0 length 0x4000 00:29:40.028 Nvme1n1 : 15.01 9597.41 37.49 11743.09 0.00 5979.41 549.55 15619.41 00:29:40.028 [2024-11-26T19:07:40.849Z] =================================================================================================================== 00:29:40.028 [2024-11-26T19:07:40.849Z] Total : 9597.41 37.49 11743.09 0.00 5979.41 549.55 15619.41 00:29:40.028 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:40.028 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:40.028 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.028 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:40.028 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.028 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:40.028 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:40.028 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:40.028 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:29:40.028 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:40.028 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:29:40.028 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:40.028 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:40.028 rmmod nvme_tcp 00:29:40.028 rmmod nvme_fabrics 00:29:40.028 rmmod nvme_keyring 00:29:40.028 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:40.028 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:29:40.028 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:29:40.028 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3841988 ']' 00:29:40.028 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3841988 00:29:40.029 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3841988 ']' 00:29:40.029 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3841988 00:29:40.029 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:29:40.029 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:40.029 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3841988 00:29:40.029 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:40.029 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:40.029 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3841988' 00:29:40.029 killing process with pid 3841988 00:29:40.029 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3841988 00:29:40.029 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3841988 00:29:40.291 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:40.291 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:40.291 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:40.291 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:29:40.291 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:29:40.291 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:40.291 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:29:40.291 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:40.291 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:40.291 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.291 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:40.291 20:07:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.206 20:07:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:42.206 00:29:42.206 real 0m28.348s 00:29:42.206 user 1m3.354s 00:29:42.206 sys 0m7.798s 00:29:42.206 20:07:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:42.206 20:07:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:42.206 ************************************ 00:29:42.206 END TEST nvmf_bdevperf 00:29:42.206 ************************************ 00:29:42.206 20:07:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:42.206 20:07:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:42.206 20:07:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:42.206 20:07:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.469 ************************************ 00:29:42.469 START TEST nvmf_target_disconnect 00:29:42.469 ************************************ 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:42.469 * Looking for test storage... 
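[annotation] The teardown traced above (nvmftestfini) reduces to the sequence below. This is a condensed sketch, not the literal helpers; it assumes SPDK's scripts/rpc.py front end (the log drives the same RPC through rpc_cmd), and _remove_spdk_ns does roughly what the netns deletion line shows.

  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem
  modprobe -v -r nvme-tcp                                           # unload host-side NVMe/TCP modules
  modprobe -v -r nvme-fabrics
  kill 3841988                                                      # killprocess: stop the nvmf_tgt app (pid as logged)
  iptables-save | grep -v SPDK_NVMF | iptables-restore              # remove only the SPDK_NVMF-tagged rules
  ip netns del cvl_0_0_ns_spdk                                      # roughly what _remove_spdk_ns does
  ip -4 addr flush cvl_0_1                                          # clear the initiator-side address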
00:29:42.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:42.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.469 --rc genhtml_branch_coverage=1 00:29:42.469 --rc genhtml_function_coverage=1 00:29:42.469 --rc genhtml_legend=1 00:29:42.469 --rc geninfo_all_blocks=1 00:29:42.469 --rc geninfo_unexecuted_blocks=1 00:29:42.469 00:29:42.469 ' 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:42.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.469 --rc genhtml_branch_coverage=1 00:29:42.469 --rc genhtml_function_coverage=1 00:29:42.469 --rc genhtml_legend=1 00:29:42.469 --rc geninfo_all_blocks=1 00:29:42.469 --rc geninfo_unexecuted_blocks=1 00:29:42.469 00:29:42.469 ' 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:42.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.469 --rc genhtml_branch_coverage=1 00:29:42.469 --rc genhtml_function_coverage=1 00:29:42.469 --rc genhtml_legend=1 00:29:42.469 --rc geninfo_all_blocks=1 00:29:42.469 --rc geninfo_unexecuted_blocks=1 00:29:42.469 00:29:42.469 ' 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:42.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.469 --rc genhtml_branch_coverage=1 00:29:42.469 --rc genhtml_function_coverage=1 00:29:42.469 --rc genhtml_legend=1 00:29:42.469 --rc geninfo_all_blocks=1 00:29:42.469 --rc geninfo_unexecuted_blocks=1 00:29:42.469 00:29:42.469 ' 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
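[annotation] The xtrace above is scripts/common.sh deciding whether the installed lcov predates 2.0 before enabling the extra coverage flags. A condensed, standalone reading of that lt/cmp_versions logic (simplified; the real helper also validates each component via decimal()):

  # returns 0 (true) if version $1 is strictly less than version $2
  lt() {
      local -a ver1 ver2
      IFS=.- read -ra ver1 <<< "$1"
      IFS=.- read -ra ver2 <<< "$2"
      local v
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # ver1 is newer
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # ver1 is older
      done
      return 1   # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov predates 2.0"   # matches the trace: ver1[0]=1 < ver2[0]=2, return 0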
nvmf/common.sh@7 -- # uname -s 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:42.469 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:42.470 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:42.470 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:29:42.470 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:42.470 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:42.470 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:42.470 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.470 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.470 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.470 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:42.470 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.470 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:29:42.470 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:42.470 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:42.470 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:42.470 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:42.470 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:42.470 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:42.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:42.470 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:42.470 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:42.470 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:42.731 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:42.732 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:42.732 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:42.732 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:42.732 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:42.732 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:42.732 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:42.732 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:42.732 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:42.732 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.732 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.732 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.732 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:42.732 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:42.732 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:29:42.732 20:07:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:50.874 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:50.874 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:50.875 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:50.875 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:50.875 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:50.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:50.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:29:50.875 00:29:50.875 --- 10.0.0.2 ping statistics --- 00:29:50.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.875 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:50.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:50.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:29:50.875 00:29:50.875 --- 10.0.0.1 ping statistics --- 00:29:50.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.875 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:50.875 ************************************ 00:29:50.875 START TEST nvmf_target_disconnect_tc1 00:29:50.875 ************************************ 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:50.875 20:07:50 
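[annotation] nvmf_tcp_init, traced above, splits the two e810 ports across network namespaces so target (10.0.0.2, inside cvl_0_0_ns_spdk) and initiator (10.0.0.1, default namespace) talk over a real link. The essential commands, collected from the trace as a sketch:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # tagged SPDK_NVMF so teardown can strip it
  ping -c 1 10.0.0.2                                                  # the two pings above verify both directions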
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:50.875 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:50.875 [2024-11-26 20:07:50.971439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.875 [2024-11-26 20:07:50.971538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f3ae0 with addr=10.0.0.2, port=4420 00:29:50.875 [2024-11-26 20:07:50.971573] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:50.875 [2024-11-26 20:07:50.971585] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:50.875 [2024-11-26 20:07:50.971594] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:50.876 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:50.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:50.876 Initializing NVMe Controllers 00:29:50.876 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:29:50.876 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:50.876 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:50.876 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:50.876 00:29:50.876 real 0m0.144s 00:29:50.876 user 0m0.065s 00:29:50.876 sys 0m0.079s 00:29:50.876 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:50.876 20:07:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:50.876 ************************************ 00:29:50.876 END TEST nvmf_target_disconnect_tc1 00:29:50.876 ************************************ 00:29:50.876 20:07:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:50.876 20:07:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:50.876 20:07:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 
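[annotation] tc1 above passes precisely because the connect fails: no listener exists yet at 10.0.0.2:4420, so spdk_nvme_probe() sees ECONNREFUSED (errno = 111) and the NOT wrapper inverts the non-zero exit (es=1). A simplified sketch of that expected-failure pattern; the real NOT/valid_exec_arg helpers do more bookkeeping, e.g. treating es > 128 (death by signal) as a genuine failure:

  NOT() {
      "$@" && return 1   # command unexpectedly succeeded -> test failure
      return 0           # command failed as expected -> test success
  }
  NOT ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'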
00:29:50.876 20:07:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:50.876 ************************************ 00:29:50.876 START TEST nvmf_target_disconnect_tc2 00:29:50.876 ************************************ 00:29:50.876 20:07:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:29:50.876 20:07:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:50.876 20:07:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:50.876 20:07:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:50.876 20:07:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:50.876 20:07:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:50.876 20:07:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3848037 00:29:50.876 20:07:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3848037 00:29:50.876 20:07:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:50.876 20:07:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3848037 ']' 00:29:50.876 20:07:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.876 20:07:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:50.876 20:07:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:50.876 20:07:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:50.876 20:07:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:50.876 [2024-11-26 20:07:51.132246] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:29:50.876 [2024-11-26 20:07:51.132306] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:50.876 [2024-11-26 20:07:51.230028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:50.876 [2024-11-26 20:07:51.282316] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:50.876 [2024-11-26 20:07:51.282365] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:50.876 [2024-11-26 20:07:51.282374] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:50.876 [2024-11-26 20:07:51.282385] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:50.876 [2024-11-26 20:07:51.282392] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:50.876 [2024-11-26 20:07:51.284417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:50.876 [2024-11-26 20:07:51.284579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:50.876 [2024-11-26 20:07:51.284740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:50.876 [2024-11-26 20:07:51.284741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:51.449 20:07:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:51.449 20:07:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:51.449 20:07:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:51.449 20:07:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:51.449 20:07:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:51.449 20:07:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:51.449 20:07:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:51.449 20:07:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.449 20:07:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:51.449 Malloc0 00:29:51.449 20:07:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.449 20:07:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:51.449 20:07:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.449 20:07:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:51.449 [2024-11-26 20:07:52.051100] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:51.449 20:07:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.449 20:07:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:51.449 20:07:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.449 20:07:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:51.449 20:07:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.449 20:07:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:51.450 20:07:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.450 20:07:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:51.450 20:07:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.450 20:07:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:51.450 20:07:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.450 20:07:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:51.450 [2024-11-26 20:07:52.091524] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:51.450 20:07:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.450 20:07:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:51.450 20:07:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.450 20:07:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:51.450 20:07:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.450 20:07:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3848160 00:29:51.450 20:07:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:51.450 20:07:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:53.366 20:07:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3848037 00:29:53.366 20:07:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:53.366 Read completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Read completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Read completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Write completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Write completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Write completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Write completed with 
error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Read completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Write completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Read completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Read completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Read completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Read completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Write completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Read completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Read completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Write completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Read completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Read completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Read completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Write completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Read completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Write completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Write completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Write completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Write completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Write completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Read completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Read completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Write completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Read completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 Write completed with error (sct=0, sc=8) 00:29:53.366 starting I/O failed 00:29:53.366 [2024-11-26 20:07:54.131450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.366 [2024-11-26 20:07:54.131908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-11-26 20:07:54.131936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 [2024-11-26 20:07:54.132176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-11-26 20:07:54.132190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 [2024-11-26 20:07:54.132651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-11-26 20:07:54.132716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 
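[annotation] The storm of errno = 111 records that follows is the point of tc2: reconnect is started with I/O in flight, the target process is then SIGKILLed, and every queue-pair reconnect attempt is refused until a target listens again. The shape of the test as traced in host/target_disconnect.sh (paraphrased, not the literal script; the restart happens after this excerpt):

  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  reconnectpid=$!       # 3848160 in this run
  sleep 2
  kill -9 3848037       # SIGKILL the nvmf_tgt pid mid-I/O (pid as logged)
  sleep 2               # every reconnect attempt now fails with ECONNREFUSED
  # the script later brings a target back up so the initiator can recover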
00:29:53.366 [2024-11-26 20:07:54.133067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-11-26 20:07:54.133081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it.
00:29:53.366 [2024-11-26 20:07:54.133531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-11-26 20:07:54.133595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it.
00:29:53.644 (the same three-message pattern -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it -- repeats verbatim for every subsequent reconnection attempt from [2024-11-26 20:07:54.133] through [2024-11-26 20:07:54.206])
00:29:53.644 [2024-11-26 20:07:54.207324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.207354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.644 [2024-11-26 20:07:54.207740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.207770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.644 [2024-11-26 20:07:54.208104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.208133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.644 [2024-11-26 20:07:54.208433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.208463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.644 [2024-11-26 20:07:54.208814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.208843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.644 [2024-11-26 20:07:54.209205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.209243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.644 [2024-11-26 20:07:54.209581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.209611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.644 [2024-11-26 20:07:54.209958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.209988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.644 [2024-11-26 20:07:54.210409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.210440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.644 [2024-11-26 20:07:54.210830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.210859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 
00:29:53.644 [2024-11-26 20:07:54.211229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.211259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.644 [2024-11-26 20:07:54.211594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.211623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.644 [2024-11-26 20:07:54.211995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.212023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.644 [2024-11-26 20:07:54.212390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.212421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.644 [2024-11-26 20:07:54.212767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.212797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.644 [2024-11-26 20:07:54.213183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.213215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.644 [2024-11-26 20:07:54.213557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.213586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.644 [2024-11-26 20:07:54.213948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.213977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.644 [2024-11-26 20:07:54.214337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.214367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.644 [2024-11-26 20:07:54.214763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.214797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 
00:29:53.644 [2024-11-26 20:07:54.215153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.215194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.644 [2024-11-26 20:07:54.215442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.215472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.644 [2024-11-26 20:07:54.215853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.215882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.644 [2024-11-26 20:07:54.216223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.216252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.644 [2024-11-26 20:07:54.216609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.216637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.644 [2024-11-26 20:07:54.217002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.217032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.644 [2024-11-26 20:07:54.217479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.217509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.644 [2024-11-26 20:07:54.217772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.217801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.644 [2024-11-26 20:07:54.218182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.218214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.644 [2024-11-26 20:07:54.218469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.218501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 
00:29:53.644 [2024-11-26 20:07:54.218893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.218922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.644 [2024-11-26 20:07:54.219295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.644 [2024-11-26 20:07:54.219328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.644 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.219692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.219723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.220114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.220144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.220511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.220542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.220909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.220937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.221311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.221341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.221706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.221734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.222101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.222130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.222543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.222574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 
00:29:53.645 [2024-11-26 20:07:54.222917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.222947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.223244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.223274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.223648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.223678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.224043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.224072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.224412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.224442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.224808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.224837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.225209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.225246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.225592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.225622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.226021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.226052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.226296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.226327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 
00:29:53.645 [2024-11-26 20:07:54.226721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.226750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.227107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.227136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.227487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.227518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.227887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.227916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.228281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.228313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.228664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.228694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.229011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.229048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.229393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.229425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.229789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.229818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.230187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.230217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 
00:29:53.645 [2024-11-26 20:07:54.230671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.230701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.231064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.231094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.231306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.231336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.231688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.231724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.232073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.232102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.232476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.232506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.232765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.232794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.233185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.233217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.233453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.233486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 00:29:53.645 [2024-11-26 20:07:54.233853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.645 [2024-11-26 20:07:54.233882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.645 qpair failed and we were unable to recover it. 
00:29:53.646 [2024-11-26 20:07:54.234254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.234285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.646 [2024-11-26 20:07:54.234658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.234687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.646 [2024-11-26 20:07:54.235042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.235071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.646 [2024-11-26 20:07:54.235437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.235476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.646 [2024-11-26 20:07:54.235831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.235861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.646 [2024-11-26 20:07:54.236291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.236322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.646 [2024-11-26 20:07:54.236671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.236700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.646 [2024-11-26 20:07:54.237133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.237184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.646 [2024-11-26 20:07:54.237529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.237560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.646 [2024-11-26 20:07:54.237919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.237949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 
00:29:53.646 [2024-11-26 20:07:54.238318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.238349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.646 [2024-11-26 20:07:54.238714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.238743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.646 [2024-11-26 20:07:54.239121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.239151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.646 [2024-11-26 20:07:54.239524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.239553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.646 [2024-11-26 20:07:54.239936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.239967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.646 [2024-11-26 20:07:54.240231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.240262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.646 [2024-11-26 20:07:54.240645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.240674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.646 [2024-11-26 20:07:54.241041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.241071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.646 [2024-11-26 20:07:54.241449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.241479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.646 [2024-11-26 20:07:54.241840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.241868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 
00:29:53.646 [2024-11-26 20:07:54.242234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.242265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.646 [2024-11-26 20:07:54.242616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.242647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.646 [2024-11-26 20:07:54.242982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.243011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.646 [2024-11-26 20:07:54.243353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.243383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.646 [2024-11-26 20:07:54.243744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.243773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.646 [2024-11-26 20:07:54.244119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.244148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.646 [2024-11-26 20:07:54.244614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.244645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.646 [2024-11-26 20:07:54.245011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.245040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.646 [2024-11-26 20:07:54.245408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.245440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.646 [2024-11-26 20:07:54.245773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.245802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 
00:29:53.646 [2024-11-26 20:07:54.246192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.246223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.646 [2024-11-26 20:07:54.246566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.646 [2024-11-26 20:07:54.246596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.646 qpair failed and we were unable to recover it. 00:29:53.647 [2024-11-26 20:07:54.246942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.647 [2024-11-26 20:07:54.246970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.647 qpair failed and we were unable to recover it. 00:29:53.647 [2024-11-26 20:07:54.247343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.647 [2024-11-26 20:07:54.247373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.647 qpair failed and we were unable to recover it. 00:29:53.647 [2024-11-26 20:07:54.247714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.647 [2024-11-26 20:07:54.247744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.647 qpair failed and we were unable to recover it. 00:29:53.647 [2024-11-26 20:07:54.248104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.647 [2024-11-26 20:07:54.248133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.647 qpair failed and we were unable to recover it. 00:29:53.647 [2024-11-26 20:07:54.248513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.647 [2024-11-26 20:07:54.248542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.647 qpair failed and we were unable to recover it. 00:29:53.647 [2024-11-26 20:07:54.248896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.647 [2024-11-26 20:07:54.248926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.647 qpair failed and we were unable to recover it. 00:29:53.647 [2024-11-26 20:07:54.249287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.647 [2024-11-26 20:07:54.249318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.647 qpair failed and we were unable to recover it. 00:29:53.647 [2024-11-26 20:07:54.249698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.647 [2024-11-26 20:07:54.249726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.647 qpair failed and we were unable to recover it. 
00:29:53.647 [2024-11-26 20:07:54.250072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.647 [2024-11-26 20:07:54.250101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.647 qpair failed and we were unable to recover it. 00:29:53.647 [2024-11-26 20:07:54.250469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.647 [2024-11-26 20:07:54.250499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.647 qpair failed and we were unable to recover it. 00:29:53.647 [2024-11-26 20:07:54.250862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.647 [2024-11-26 20:07:54.250892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.647 qpair failed and we were unable to recover it. 00:29:53.647 [2024-11-26 20:07:54.251264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.647 [2024-11-26 20:07:54.251295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.647 qpair failed and we were unable to recover it. 00:29:53.647 [2024-11-26 20:07:54.251646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.647 [2024-11-26 20:07:54.251681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.647 qpair failed and we were unable to recover it. 00:29:53.647 [2024-11-26 20:07:54.252038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.647 [2024-11-26 20:07:54.252068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.647 qpair failed and we were unable to recover it. 00:29:53.647 [2024-11-26 20:07:54.252438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.647 [2024-11-26 20:07:54.252468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.647 qpair failed and we were unable to recover it. 00:29:53.647 [2024-11-26 20:07:54.252906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.647 [2024-11-26 20:07:54.252935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.647 qpair failed and we were unable to recover it. 00:29:53.647 [2024-11-26 20:07:54.253287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.647 [2024-11-26 20:07:54.253318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.647 qpair failed and we were unable to recover it. 00:29:53.647 [2024-11-26 20:07:54.253682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.647 [2024-11-26 20:07:54.253711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.647 qpair failed and we were unable to recover it. 
00:29:53.647 [2024-11-26 20:07:54.254068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.647 [2024-11-26 20:07:54.254098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.647 qpair failed and we were unable to recover it. 00:29:53.647 [2024-11-26 20:07:54.254443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.647 [2024-11-26 20:07:54.254474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.647 qpair failed and we were unable to recover it. 00:29:53.647 [2024-11-26 20:07:54.254829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.647 [2024-11-26 20:07:54.254858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.647 qpair failed and we were unable to recover it. 00:29:53.647 [2024-11-26 20:07:54.255225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.647 [2024-11-26 20:07:54.255256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.647 qpair failed and we were unable to recover it. 00:29:53.647 [2024-11-26 20:07:54.255629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.647 [2024-11-26 20:07:54.255667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.647 qpair failed and we were unable to recover it. 00:29:53.647 [2024-11-26 20:07:54.255989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.647 [2024-11-26 20:07:54.256018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.647 qpair failed and we were unable to recover it. 00:29:53.647 [2024-11-26 20:07:54.256432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.647 [2024-11-26 20:07:54.256463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.647 qpair failed and we were unable to recover it. 00:29:53.647 [2024-11-26 20:07:54.256809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.647 [2024-11-26 20:07:54.256840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.647 qpair failed and we were unable to recover it. 00:29:53.647 [2024-11-26 20:07:54.257196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.647 [2024-11-26 20:07:54.257226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.647 qpair failed and we were unable to recover it. 00:29:53.647 [2024-11-26 20:07:54.257576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.647 [2024-11-26 20:07:54.257606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.647 qpair failed and we were unable to recover it. 
00:29:53.647 [2024-11-26 20:07:54.257975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.647 [2024-11-26 20:07:54.258005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:53.647 qpair failed and we were unable to recover it.
[... the three-line pattern above repeats some 210 times between 2024-11-26 20:07:54.257975 and 20:07:54.338404 (log time 00:29:53.647 through 00:29:53.653): every connect() attempt to 10.0.0.2 port 4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports the same sock connection error for tqpair=0xd070c0, and each qpair fails without recovery ...]
00:29:53.653 [2024-11-26 20:07:54.338759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.653 [2024-11-26 20:07:54.338789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.653 qpair failed and we were unable to recover it. 00:29:53.653 [2024-11-26 20:07:54.339148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.653 [2024-11-26 20:07:54.339206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.653 qpair failed and we were unable to recover it. 00:29:53.653 [2024-11-26 20:07:54.339555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.653 [2024-11-26 20:07:54.339584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.653 qpair failed and we were unable to recover it. 00:29:53.653 [2024-11-26 20:07:54.339951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.653 [2024-11-26 20:07:54.339979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.653 qpair failed and we were unable to recover it. 00:29:53.653 [2024-11-26 20:07:54.340354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.653 [2024-11-26 20:07:54.340384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.653 qpair failed and we were unable to recover it. 00:29:53.653 [2024-11-26 20:07:54.340753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.653 [2024-11-26 20:07:54.340782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.653 qpair failed and we were unable to recover it. 00:29:53.653 [2024-11-26 20:07:54.341148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.653 [2024-11-26 20:07:54.341192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.653 qpair failed and we were unable to recover it. 00:29:53.653 [2024-11-26 20:07:54.341549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.653 [2024-11-26 20:07:54.341578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.653 qpair failed and we were unable to recover it. 00:29:53.653 [2024-11-26 20:07:54.341951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.653 [2024-11-26 20:07:54.341980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.653 qpair failed and we were unable to recover it. 00:29:53.653 [2024-11-26 20:07:54.342246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.653 [2024-11-26 20:07:54.342277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.653 qpair failed and we were unable to recover it. 
00:29:53.653 [2024-11-26 20:07:54.342673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.653 [2024-11-26 20:07:54.342704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.653 qpair failed and we were unable to recover it. 00:29:53.653 [2024-11-26 20:07:54.343092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.653 [2024-11-26 20:07:54.343123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.653 qpair failed and we were unable to recover it. 00:29:53.653 [2024-11-26 20:07:54.343485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.653 [2024-11-26 20:07:54.343516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.653 qpair failed and we were unable to recover it. 00:29:53.653 [2024-11-26 20:07:54.343881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.653 [2024-11-26 20:07:54.343910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.653 qpair failed and we were unable to recover it. 00:29:53.653 [2024-11-26 20:07:54.344270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.653 [2024-11-26 20:07:54.344301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.653 qpair failed and we were unable to recover it. 00:29:53.653 [2024-11-26 20:07:54.344736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.653 [2024-11-26 20:07:54.344764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.653 qpair failed and we were unable to recover it. 00:29:53.653 [2024-11-26 20:07:54.345124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.653 [2024-11-26 20:07:54.345153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.653 qpair failed and we were unable to recover it. 00:29:53.653 [2024-11-26 20:07:54.345437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.653 [2024-11-26 20:07:54.345467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.653 qpair failed and we were unable to recover it. 00:29:53.653 [2024-11-26 20:07:54.345869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.653 [2024-11-26 20:07:54.345898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.653 qpair failed and we were unable to recover it. 00:29:53.653 [2024-11-26 20:07:54.346134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.653 [2024-11-26 20:07:54.346203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.653 qpair failed and we were unable to recover it. 
00:29:53.653 [2024-11-26 20:07:54.346599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.653 [2024-11-26 20:07:54.346629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.653 qpair failed and we were unable to recover it. 00:29:53.653 [2024-11-26 20:07:54.347024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.653 [2024-11-26 20:07:54.347054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.653 qpair failed and we were unable to recover it. 00:29:53.653 [2024-11-26 20:07:54.347394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.653 [2024-11-26 20:07:54.347425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.653 qpair failed and we were unable to recover it. 00:29:53.653 [2024-11-26 20:07:54.347784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.653 [2024-11-26 20:07:54.347814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.653 qpair failed and we were unable to recover it. 00:29:53.653 [2024-11-26 20:07:54.348183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.653 [2024-11-26 20:07:54.348216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.653 qpair failed and we were unable to recover it. 00:29:53.653 [2024-11-26 20:07:54.348647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.348677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.349016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.349046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.349417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.349448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.349758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.349790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.350138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.350178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 
00:29:53.654 [2024-11-26 20:07:54.350521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.350551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.350907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.350937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.351292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.351323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.351769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.351800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.352174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.352206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.352440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.352472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.352819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.352850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.353200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.353229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.353595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.353623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.354046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.354076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 
00:29:53.654 [2024-11-26 20:07:54.354406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.354438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.354800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.354830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.355080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.355110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.355499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.355530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.355894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.355925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.356277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.356308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.356735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.356772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.357103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.357134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.357507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.357537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.357878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.357907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 
00:29:53.654 [2024-11-26 20:07:54.358270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.358299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.358676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.358706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.359070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.359100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.359561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.359592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.359936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.359965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.360331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.360362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.360701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.360730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.361080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.361110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.361465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.361496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.361852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.361881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 
00:29:53.654 [2024-11-26 20:07:54.362244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.362275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.362634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.362663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.363031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.363060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.654 [2024-11-26 20:07:54.363314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.654 [2024-11-26 20:07:54.363343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.654 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.363694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.363724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.364059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.364089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.364441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.364472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.364828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.364858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.365220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.365250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.365630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.365659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 
00:29:53.655 [2024-11-26 20:07:54.366034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.366065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.366410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.366440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.366797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.366825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.367194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.367225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.367583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.367611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.367981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.368013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.368346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.368377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.368637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.368665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.369010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.369039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.369291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.369324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 
00:29:53.655 [2024-11-26 20:07:54.369690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.369719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.370084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.370114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.370477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.370509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.370834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.370863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.371228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.371258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.371607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.371638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.372013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.372042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.372387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.372430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.372818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.372848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.373205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.373236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 
00:29:53.655 [2024-11-26 20:07:54.373695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.373724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.374089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.374119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.374476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.374507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.374906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.374935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.375294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.375326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.375756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.375784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.376148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.376193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.376450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.376482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.376715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.376748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.377114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.377145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 
00:29:53.655 [2024-11-26 20:07:54.377546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.377576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.377932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.377962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.655 qpair failed and we were unable to recover it. 00:29:53.655 [2024-11-26 20:07:54.378326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.655 [2024-11-26 20:07:54.378357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.656 qpair failed and we were unable to recover it. 00:29:53.656 [2024-11-26 20:07:54.378767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.656 [2024-11-26 20:07:54.378798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.656 qpair failed and we were unable to recover it. 00:29:53.656 [2024-11-26 20:07:54.379156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.656 [2024-11-26 20:07:54.379198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.656 qpair failed and we were unable to recover it. 00:29:53.656 [2024-11-26 20:07:54.379551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.656 [2024-11-26 20:07:54.379580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.656 qpair failed and we were unable to recover it. 00:29:53.656 [2024-11-26 20:07:54.379984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.656 [2024-11-26 20:07:54.380012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.656 qpair failed and we were unable to recover it. 00:29:53.656 [2024-11-26 20:07:54.380336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.656 [2024-11-26 20:07:54.380367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.656 qpair failed and we were unable to recover it. 00:29:53.656 [2024-11-26 20:07:54.380742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.656 [2024-11-26 20:07:54.380773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.656 qpair failed and we were unable to recover it. 00:29:53.656 [2024-11-26 20:07:54.381000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.656 [2024-11-26 20:07:54.381032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.656 qpair failed and we were unable to recover it. 
00:29:53.656 [2024-11-26 20:07:54.381414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.656 [2024-11-26 20:07:54.381445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.656 qpair failed and we were unable to recover it. 00:29:53.656 [2024-11-26 20:07:54.381841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.656 [2024-11-26 20:07:54.381872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.656 qpair failed and we were unable to recover it. 00:29:53.656 [2024-11-26 20:07:54.382239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.656 [2024-11-26 20:07:54.382271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.656 qpair failed and we were unable to recover it. 00:29:53.656 [2024-11-26 20:07:54.382664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.656 [2024-11-26 20:07:54.382696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.656 qpair failed and we were unable to recover it. 00:29:53.656 [2024-11-26 20:07:54.383053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.656 [2024-11-26 20:07:54.383091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.656 qpair failed and we were unable to recover it. 00:29:53.656 [2024-11-26 20:07:54.383456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.656 [2024-11-26 20:07:54.383488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.656 qpair failed and we were unable to recover it. 00:29:53.656 [2024-11-26 20:07:54.383828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.656 [2024-11-26 20:07:54.383859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.656 qpair failed and we were unable to recover it. 00:29:53.656 [2024-11-26 20:07:54.384232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.656 [2024-11-26 20:07:54.384263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.656 qpair failed and we were unable to recover it. 00:29:53.656 [2024-11-26 20:07:54.384645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.656 [2024-11-26 20:07:54.384676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.656 qpair failed and we were unable to recover it. 00:29:53.656 [2024-11-26 20:07:54.385035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.656 [2024-11-26 20:07:54.385065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.656 qpair failed and we were unable to recover it. 
00:29:53.656 [2024-11-26 20:07:54.385471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.656 [2024-11-26 20:07:54.385502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.656 qpair failed and we were unable to recover it. 00:29:53.656 [2024-11-26 20:07:54.385861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.656 [2024-11-26 20:07:54.385893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.656 qpair failed and we were unable to recover it. 00:29:53.656 [2024-11-26 20:07:54.386317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.656 [2024-11-26 20:07:54.386350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.656 qpair failed and we were unable to recover it. 00:29:53.656 [2024-11-26 20:07:54.386710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.656 [2024-11-26 20:07:54.386740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.656 qpair failed and we were unable to recover it. 00:29:53.656 [2024-11-26 20:07:54.387112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.656 [2024-11-26 20:07:54.387142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.656 qpair failed and we were unable to recover it. 00:29:53.656 [2024-11-26 20:07:54.387499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.656 [2024-11-26 20:07:54.387529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.656 qpair failed and we were unable to recover it. 00:29:53.656 [2024-11-26 20:07:54.387906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.656 [2024-11-26 20:07:54.387937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.656 qpair failed and we were unable to recover it. 00:29:53.656 [2024-11-26 20:07:54.388277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.656 [2024-11-26 20:07:54.388309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.656 qpair failed and we were unable to recover it. 00:29:53.656 [2024-11-26 20:07:54.388662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.656 [2024-11-26 20:07:54.388692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.656 qpair failed and we were unable to recover it. 00:29:53.656 [2024-11-26 20:07:54.389057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.656 [2024-11-26 20:07:54.389087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.656 qpair failed and we were unable to recover it. 
00:29:53.656 [2024-11-26 20:07:54.389453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.656 [2024-11-26 20:07:54.389484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:53.656 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 20:07:54.389 through 20:07:54.476 (elapsed 00:29:53.656-00:29:53.934); duplicate occurrences elided ...]
00:29:53.933 [2024-11-26 20:07:54.476304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.934 [2024-11-26 20:07:54.476336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:53.934 qpair failed and we were unable to recover it.
00:29:53.934 [2024-11-26 20:07:54.476699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.476728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.477096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.477124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.477506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.477537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.477896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.477925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.478177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.478206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.478576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.478604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.478977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.479005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.479365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.479396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.479765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.479793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.480155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.480200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 
00:29:53.934 [2024-11-26 20:07:54.480543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.480572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.480937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.480966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.481333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.481364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.481713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.481742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.482106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.482135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.482440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.482469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.482829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.482858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.483212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.483244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.483617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.483646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.484020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.484054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 
00:29:53.934 [2024-11-26 20:07:54.484391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.484422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.484774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.484803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.485057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.485088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.485312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.485344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.485745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.485775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.486030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.486058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.486426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.486456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.486709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.486737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.487096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.487124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.487472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.487502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 
00:29:53.934 [2024-11-26 20:07:54.487861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.487890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.488238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.488269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.488636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.488666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.489002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.489032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.489391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.489421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.489668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.489700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.490055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.490085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.934 qpair failed and we were unable to recover it. 00:29:53.934 [2024-11-26 20:07:54.490425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.934 [2024-11-26 20:07:54.490456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.490805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.490834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.491194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.491225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 
00:29:53.935 [2024-11-26 20:07:54.491616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.491643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.492011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.492040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.492295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.492326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.492716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.492745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.493113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.493144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.493494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.493524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.493889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.493917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.494288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.494319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.494695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.494724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.495090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.495119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 
00:29:53.935 [2024-11-26 20:07:54.495484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.495514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.495854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.495883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.496120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.496151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.496551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.496580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.496939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.496968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.497339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.497369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.497613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.497642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.498004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.498034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.498475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.498505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.498859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.498888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 
00:29:53.935 [2024-11-26 20:07:54.499258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.499294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.499639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.499668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.499971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.499999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.500335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.500365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.500730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.500759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.501108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.501137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.501520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.501550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.501813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.501841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.502188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.502218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.502578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.502607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 
00:29:53.935 [2024-11-26 20:07:54.502977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.503005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.503349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.503379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.503792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.503821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.504064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.504093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.504519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.504550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.935 [2024-11-26 20:07:54.504912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.935 [2024-11-26 20:07:54.504941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.935 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.505207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.505239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.505643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.505672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.506034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.506062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.506432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.506464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 
00:29:53.936 [2024-11-26 20:07:54.506826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.506854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.507086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.507117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.507467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.507498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.507858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.507887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.508238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.508268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.508613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.508642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.509001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.509031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.509394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.509430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.509679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.509711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.510068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.510098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 
00:29:53.936 [2024-11-26 20:07:54.510429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.510460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.510820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.510849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.511093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.511125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.511485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.511515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.511882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.511911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.512279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.512309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.512717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.512746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.512985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.513013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.513353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.513385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.513635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.513665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 
00:29:53.936 [2024-11-26 20:07:54.514054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.514084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.514440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.514472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.514809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.514838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.515205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.515235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.515635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.515670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.516057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.516087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.516480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.516509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.516735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.936 [2024-11-26 20:07:54.516764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.936 qpair failed and we were unable to recover it. 00:29:53.936 [2024-11-26 20:07:54.517132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.937 [2024-11-26 20:07:54.517171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.937 qpair failed and we were unable to recover it. 00:29:53.937 [2024-11-26 20:07:54.517521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.937 [2024-11-26 20:07:54.517549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.937 qpair failed and we were unable to recover it. 
00:29:53.937 [2024-11-26 20:07:54.517913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.937 [2024-11-26 20:07:54.517941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.937 qpair failed and we were unable to recover it. 00:29:53.937 [2024-11-26 20:07:54.518317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.937 [2024-11-26 20:07:54.518348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.937 qpair failed and we were unable to recover it. 00:29:53.937 [2024-11-26 20:07:54.518715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.937 [2024-11-26 20:07:54.518743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.937 qpair failed and we were unable to recover it. 00:29:53.937 [2024-11-26 20:07:54.519101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.937 [2024-11-26 20:07:54.519130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.937 qpair failed and we were unable to recover it. 00:29:53.937 [2024-11-26 20:07:54.519515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.937 [2024-11-26 20:07:54.519545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.937 qpair failed and we were unable to recover it. 00:29:53.937 [2024-11-26 20:07:54.519913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.937 [2024-11-26 20:07:54.519941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.937 qpair failed and we were unable to recover it. 00:29:53.937 [2024-11-26 20:07:54.520314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.937 [2024-11-26 20:07:54.520343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.937 qpair failed and we were unable to recover it. 00:29:53.937 [2024-11-26 20:07:54.520716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.937 [2024-11-26 20:07:54.520746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.937 qpair failed and we were unable to recover it. 00:29:53.937 [2024-11-26 20:07:54.521106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.937 [2024-11-26 20:07:54.521136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.937 qpair failed and we were unable to recover it. 00:29:53.937 [2024-11-26 20:07:54.521518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.937 [2024-11-26 20:07:54.521550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.937 qpair failed and we were unable to recover it. 
00:29:53.937 [2024-11-26 20:07:54.521886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.937 [2024-11-26 20:07:54.521915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.937 qpair failed and we were unable to recover it. 00:29:53.937 [2024-11-26 20:07:54.522177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.937 [2024-11-26 20:07:54.522208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.937 qpair failed and we were unable to recover it. 00:29:53.937 [2024-11-26 20:07:54.522568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.937 [2024-11-26 20:07:54.522598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.937 qpair failed and we were unable to recover it. 00:29:53.937 [2024-11-26 20:07:54.522969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.937 [2024-11-26 20:07:54.522999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.937 qpair failed and we were unable to recover it. 00:29:53.937 [2024-11-26 20:07:54.523338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.937 [2024-11-26 20:07:54.523369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.937 qpair failed and we were unable to recover it. 00:29:53.937 [2024-11-26 20:07:54.523624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.937 [2024-11-26 20:07:54.523653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.937 qpair failed and we were unable to recover it. 00:29:53.937 [2024-11-26 20:07:54.524019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.937 [2024-11-26 20:07:54.524048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.937 qpair failed and we were unable to recover it. 00:29:53.937 [2024-11-26 20:07:54.524402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.937 [2024-11-26 20:07:54.524434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.937 qpair failed and we were unable to recover it. 00:29:53.937 [2024-11-26 20:07:54.524690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.937 [2024-11-26 20:07:54.524731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.937 qpair failed and we were unable to recover it. 00:29:53.937 [2024-11-26 20:07:54.525095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.937 [2024-11-26 20:07:54.525124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.937 qpair failed and we were unable to recover it. 
00:29:53.937 [2024-11-26 20:07:54.525502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.937 [2024-11-26 20:07:54.525533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:53.937 qpair failed and we were unable to recover it.
00:29:53.937 [... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously for every reconnect attempt from 20:07:54.525502 through 20:07:54.605437; duplicate occurrences elided ...]
00:29:53.943 [2024-11-26 20:07:54.605407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.943 [2024-11-26 20:07:54.605437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:53.943 qpair failed and we were unable to recover it.
00:29:53.943 [2024-11-26 20:07:54.605792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.605820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.606190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.606220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.606554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.606584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.606934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.606962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.607327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.607357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.607610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.607639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.608026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.608055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.608335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.608364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.608713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.608743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.609090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.609119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 
00:29:53.943 [2024-11-26 20:07:54.609485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.609514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.609749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.609778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.610142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.610184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.610520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.610550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.610922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.610951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.611320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.611352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.611697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.611726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.612081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.612110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.612522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.612553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.612911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.612939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 
00:29:53.943 [2024-11-26 20:07:54.613303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.613334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.613766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.613795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.614126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.614154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.614542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.614572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.614941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.614970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.615240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.615270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.615620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.615649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.616011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.616040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.616408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.616445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.616764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.616794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 
00:29:53.943 [2024-11-26 20:07:54.617145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.617183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.617546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.943 [2024-11-26 20:07:54.617583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.943 qpair failed and we were unable to recover it. 00:29:53.943 [2024-11-26 20:07:54.617948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.617976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.618392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.618428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.618771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.618801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.619054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.619083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.619524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.619555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.619906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.619935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.620298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.620328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.620697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.620725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 
00:29:53.944 [2024-11-26 20:07:54.621090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.621118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.621477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.621507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.621877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.621906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.622273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.622303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.622658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.622686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.623030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.623059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.623423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.623454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.623854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.623882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.624235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.624265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.624634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.624670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 
00:29:53.944 [2024-11-26 20:07:54.625035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.625063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.625299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.625328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.625689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.625718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.626085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.626113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.626485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.626515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.626879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.626908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.627270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.627300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.627663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.627691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.628057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.628087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.628464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.628493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 
00:29:53.944 [2024-11-26 20:07:54.628846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.628880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.629313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.629344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.629708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.629738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.630099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.630128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.630505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.630536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.630886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.630915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.631270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.631301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.631666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.631694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.632048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.632076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.944 [2024-11-26 20:07:54.632444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.632476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 
00:29:53.944 [2024-11-26 20:07:54.632820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.944 [2024-11-26 20:07:54.632850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.944 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.633185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.633216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.633597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.633627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.633857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.633890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.634293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.634324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.634704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.634734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.635101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.635129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.635503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.635533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.635893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.635922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.636287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.636318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 
00:29:53.945 [2024-11-26 20:07:54.636585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.636619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.636997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.637026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.637396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.637427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.637778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.637807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.638186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.638217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.638576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.638608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.638948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.638979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.639341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.639373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.639716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.639747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.640111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.640142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 
00:29:53.945 [2024-11-26 20:07:54.640391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.640422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.640782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.640813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.641181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.641214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.641566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.641595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.641960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.641988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.642346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.642379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.642642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.642672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.643017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.643048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.643390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.643420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.643773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.643803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 
00:29:53.945 [2024-11-26 20:07:54.644170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.644202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.644537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.644573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.644922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.644951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.645239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.645270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.645540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.645569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.645948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.645979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.646345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.646375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.646737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.646767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.945 qpair failed and we were unable to recover it. 00:29:53.945 [2024-11-26 20:07:54.647133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.945 [2024-11-26 20:07:54.647189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.946 qpair failed and we were unable to recover it. 00:29:53.946 [2024-11-26 20:07:54.647606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.946 [2024-11-26 20:07:54.647637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.946 qpair failed and we were unable to recover it. 
00:29:53.946 [2024-11-26 20:07:54.647979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.946 [2024-11-26 20:07:54.648009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.946 qpair failed and we were unable to recover it. 00:29:53.946 [2024-11-26 20:07:54.648392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.946 [2024-11-26 20:07:54.648425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.946 qpair failed and we were unable to recover it. 00:29:53.946 [2024-11-26 20:07:54.648761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.946 [2024-11-26 20:07:54.648791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.946 qpair failed and we were unable to recover it. 00:29:53.946 [2024-11-26 20:07:54.649241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.946 [2024-11-26 20:07:54.649275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.946 qpair failed and we were unable to recover it. 00:29:53.946 [2024-11-26 20:07:54.649636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.946 [2024-11-26 20:07:54.649665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.946 qpair failed and we were unable to recover it. 00:29:53.946 [2024-11-26 20:07:54.650114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.946 [2024-11-26 20:07:54.650143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.946 qpair failed and we were unable to recover it. 00:29:53.946 [2024-11-26 20:07:54.650574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.946 [2024-11-26 20:07:54.650604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.946 qpair failed and we were unable to recover it. 00:29:53.946 [2024-11-26 20:07:54.650971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.946 [2024-11-26 20:07:54.651001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.946 qpair failed and we were unable to recover it. 00:29:53.946 [2024-11-26 20:07:54.651384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.946 [2024-11-26 20:07:54.651415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.946 qpair failed and we were unable to recover it. 00:29:53.946 [2024-11-26 20:07:54.651779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.946 [2024-11-26 20:07:54.651808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.946 qpair failed and we were unable to recover it. 
00:29:53.946 [2024-11-26 20:07:54.652178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.946 [2024-11-26 20:07:54.652208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.946 qpair failed and we were unable to recover it. 00:29:53.946 [2024-11-26 20:07:54.652547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.946 [2024-11-26 20:07:54.652577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.946 qpair failed and we were unable to recover it. 00:29:53.946 [2024-11-26 20:07:54.652929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.946 [2024-11-26 20:07:54.652959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.946 qpair failed and we were unable to recover it. 00:29:53.946 [2024-11-26 20:07:54.653323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.946 [2024-11-26 20:07:54.653354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.946 qpair failed and we were unable to recover it. 00:29:53.946 [2024-11-26 20:07:54.653702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.946 [2024-11-26 20:07:54.653733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.946 qpair failed and we were unable to recover it. 00:29:53.946 [2024-11-26 20:07:54.653982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.946 [2024-11-26 20:07:54.654012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.946 qpair failed and we were unable to recover it. 00:29:53.946 [2024-11-26 20:07:54.654388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.946 [2024-11-26 20:07:54.654421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.946 qpair failed and we were unable to recover it. 00:29:53.946 [2024-11-26 20:07:54.654777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.946 [2024-11-26 20:07:54.654807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.946 qpair failed and we were unable to recover it. 00:29:53.946 [2024-11-26 20:07:54.655182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.946 [2024-11-26 20:07:54.655212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.946 qpair failed and we were unable to recover it. 00:29:53.946 [2024-11-26 20:07:54.655596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.946 [2024-11-26 20:07:54.655626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.946 qpair failed and we were unable to recover it. 
00:29:53.946 [2024-11-26 20:07:54.655983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.946 [2024-11-26 20:07:54.656013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:53.946 qpair failed and we were unable to recover it.
00:29:53.951 [... the same three-line record — connect() failed, errno = 111; sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously, differing only in timestamps, from 2024-11-26 20:07:54.655983 through 20:07:54.734910 ...]
00:29:53.952 [2024-11-26 20:07:54.735170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.952 [2024-11-26 20:07:54.735201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.952 qpair failed and we were unable to recover it. 00:29:53.952 [2024-11-26 20:07:54.735587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.952 [2024-11-26 20:07:54.735615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.952 qpair failed and we were unable to recover it. 00:29:53.952 [2024-11-26 20:07:54.735986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.952 [2024-11-26 20:07:54.736015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.952 qpair failed and we were unable to recover it. 00:29:53.952 [2024-11-26 20:07:54.736393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.952 [2024-11-26 20:07:54.736424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.952 qpair failed and we were unable to recover it. 00:29:53.952 [2024-11-26 20:07:54.736776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.952 [2024-11-26 20:07:54.736806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.952 qpair failed and we were unable to recover it. 00:29:53.952 [2024-11-26 20:07:54.737148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.952 [2024-11-26 20:07:54.737189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.952 qpair failed and we were unable to recover it. 00:29:53.952 [2024-11-26 20:07:54.737582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.952 [2024-11-26 20:07:54.737617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.952 qpair failed and we were unable to recover it. 00:29:53.952 [2024-11-26 20:07:54.737985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.952 [2024-11-26 20:07:54.738014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.952 qpair failed and we were unable to recover it. 00:29:53.952 [2024-11-26 20:07:54.738393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.952 [2024-11-26 20:07:54.738424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:53.952 qpair failed and we were unable to recover it. 00:29:54.224 [2024-11-26 20:07:54.738765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.224 [2024-11-26 20:07:54.738798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.224 qpair failed and we were unable to recover it. 
00:29:54.224 [2024-11-26 20:07:54.739192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.224 [2024-11-26 20:07:54.739224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.224 qpair failed and we were unable to recover it. 00:29:54.224 [2024-11-26 20:07:54.739644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.224 [2024-11-26 20:07:54.739674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.224 qpair failed and we were unable to recover it. 00:29:54.224 [2024-11-26 20:07:54.740026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.224 [2024-11-26 20:07:54.740054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.224 qpair failed and we were unable to recover it. 00:29:54.224 [2024-11-26 20:07:54.740414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.224 [2024-11-26 20:07:54.740446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.224 qpair failed and we were unable to recover it. 00:29:54.224 [2024-11-26 20:07:54.740789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.224 [2024-11-26 20:07:54.740818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.224 qpair failed and we were unable to recover it. 00:29:54.224 [2024-11-26 20:07:54.741180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.224 [2024-11-26 20:07:54.741212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.224 qpair failed and we were unable to recover it. 00:29:54.224 [2024-11-26 20:07:54.741577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.224 [2024-11-26 20:07:54.741606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.224 qpair failed and we were unable to recover it. 00:29:54.224 [2024-11-26 20:07:54.741959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.224 [2024-11-26 20:07:54.741988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.224 qpair failed and we were unable to recover it. 00:29:54.224 [2024-11-26 20:07:54.742374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.224 [2024-11-26 20:07:54.742404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.224 qpair failed and we were unable to recover it. 00:29:54.224 [2024-11-26 20:07:54.742763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.224 [2024-11-26 20:07:54.742791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.224 qpair failed and we were unable to recover it. 
00:29:54.224 [2024-11-26 20:07:54.743139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.224 [2024-11-26 20:07:54.743179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.224 qpair failed and we were unable to recover it. 00:29:54.224 [2024-11-26 20:07:54.743505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.224 [2024-11-26 20:07:54.743534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.224 qpair failed and we were unable to recover it. 00:29:54.224 [2024-11-26 20:07:54.743896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.224 [2024-11-26 20:07:54.743924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.224 qpair failed and we were unable to recover it. 00:29:54.224 [2024-11-26 20:07:54.744271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.224 [2024-11-26 20:07:54.744300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.224 qpair failed and we were unable to recover it. 00:29:54.224 [2024-11-26 20:07:54.744650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.224 [2024-11-26 20:07:54.744679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.224 qpair failed and we were unable to recover it. 00:29:54.224 [2024-11-26 20:07:54.745039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.224 [2024-11-26 20:07:54.745068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.224 qpair failed and we were unable to recover it. 00:29:54.224 [2024-11-26 20:07:54.745434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.224 [2024-11-26 20:07:54.745462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.224 qpair failed and we were unable to recover it. 00:29:54.224 [2024-11-26 20:07:54.745849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.224 [2024-11-26 20:07:54.745878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.224 qpair failed and we were unable to recover it. 00:29:54.224 [2024-11-26 20:07:54.746199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.224 [2024-11-26 20:07:54.746229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.224 qpair failed and we were unable to recover it. 00:29:54.224 [2024-11-26 20:07:54.746592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.224 [2024-11-26 20:07:54.746622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.224 qpair failed and we were unable to recover it. 
00:29:54.224 [2024-11-26 20:07:54.747057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.224 [2024-11-26 20:07:54.747086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.224 qpair failed and we were unable to recover it. 00:29:54.224 [2024-11-26 20:07:54.747450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.224 [2024-11-26 20:07:54.747479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.224 qpair failed and we were unable to recover it. 00:29:54.224 [2024-11-26 20:07:54.747734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.224 [2024-11-26 20:07:54.747762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.224 qpair failed and we were unable to recover it. 00:29:54.224 [2024-11-26 20:07:54.748117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.224 [2024-11-26 20:07:54.748152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.224 qpair failed and we were unable to recover it. 00:29:54.224 [2024-11-26 20:07:54.748530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.224 [2024-11-26 20:07:54.748559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.224 qpair failed and we were unable to recover it. 00:29:54.224 [2024-11-26 20:07:54.748922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.748950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.749300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.749331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.749694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.749722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.750086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.750115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.750463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.750494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 
00:29:54.225 [2024-11-26 20:07:54.750732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.750760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.751108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.751137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.751530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.751560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.751911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.751940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.752319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.752350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.752705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.752733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.753087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.753115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.753535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.753566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.753925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.753953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.754313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.754342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 
00:29:54.225 [2024-11-26 20:07:54.754721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.754750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.755122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.755150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.755502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.755532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.755893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.755921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.756280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.756310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.756554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.756582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.756958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.756987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.757290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.757327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.757660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.757688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.758047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.758076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 
00:29:54.225 [2024-11-26 20:07:54.758436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.758466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.758809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.758838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.759212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.759242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.759613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.759643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.760003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.760033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.760399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.760429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.760792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.760821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.761259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.761289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.761656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.761687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.762045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.762074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 
00:29:54.225 [2024-11-26 20:07:54.762435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.762465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.762801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.762830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.763194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.225 [2024-11-26 20:07:54.763224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.225 qpair failed and we were unable to recover it. 00:29:54.225 [2024-11-26 20:07:54.763461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.763489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.763792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.763827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.764181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.764213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.764549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.764578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.764933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.764961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.765321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.765350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.765728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.765757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 
00:29:54.226 [2024-11-26 20:07:54.766116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.766144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.766522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.766550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.766931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.766959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.767325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.767355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.767718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.767747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.768118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.768147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.768522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.768552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.768797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.768825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.769195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.769226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.769605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.769635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 
00:29:54.226 [2024-11-26 20:07:54.769969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.769997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.770363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.770393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.770842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.770870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.771238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.771269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.771651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.771679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.772040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.772069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.772438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.772466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.772832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.772861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.773222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.773252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.773633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.773661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 
00:29:54.226 [2024-11-26 20:07:54.774011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.774040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.774418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.774454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.774819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.774848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.775195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.775224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.775584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.775613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.775847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.775875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.776244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.776274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.776648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.776677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.777036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.777065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.777407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.777438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 
00:29:54.226 [2024-11-26 20:07:54.777690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.777721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.226 [2024-11-26 20:07:54.778080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.226 [2024-11-26 20:07:54.778110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.226 qpair failed and we were unable to recover it. 00:29:54.227 [2024-11-26 20:07:54.778494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.227 [2024-11-26 20:07:54.778524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.227 qpair failed and we were unable to recover it. 00:29:54.227 [2024-11-26 20:07:54.778887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.227 [2024-11-26 20:07:54.778916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.227 qpair failed and we were unable to recover it. 00:29:54.227 [2024-11-26 20:07:54.779273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.227 [2024-11-26 20:07:54.779303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.227 qpair failed and we were unable to recover it. 00:29:54.227 [2024-11-26 20:07:54.779668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.227 [2024-11-26 20:07:54.779697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.227 qpair failed and we were unable to recover it. 00:29:54.227 [2024-11-26 20:07:54.780037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.227 [2024-11-26 20:07:54.780065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.227 qpair failed and we were unable to recover it. 00:29:54.227 [2024-11-26 20:07:54.780328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.227 [2024-11-26 20:07:54.780357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.227 qpair failed and we were unable to recover it. 00:29:54.227 [2024-11-26 20:07:54.780630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.227 [2024-11-26 20:07:54.780658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.227 qpair failed and we were unable to recover it. 00:29:54.227 [2024-11-26 20:07:54.781012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.227 [2024-11-26 20:07:54.781042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.227 qpair failed and we were unable to recover it. 
00:29:54.227 [2024-11-26 20:07:54.781398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.227 [2024-11-26 20:07:54.781429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.227 qpair failed and we were unable to recover it. 00:29:54.227 [2024-11-26 20:07:54.781787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.227 [2024-11-26 20:07:54.781815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.227 qpair failed and we were unable to recover it. 00:29:54.227 [2024-11-26 20:07:54.782179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.227 [2024-11-26 20:07:54.782209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.227 qpair failed and we were unable to recover it. 00:29:54.227 [2024-11-26 20:07:54.782561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.227 [2024-11-26 20:07:54.782589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.227 qpair failed and we were unable to recover it. 00:29:54.227 [2024-11-26 20:07:54.782951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.227 [2024-11-26 20:07:54.782979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.227 qpair failed and we were unable to recover it. 00:29:54.227 [2024-11-26 20:07:54.783338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.227 [2024-11-26 20:07:54.783368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.227 qpair failed and we were unable to recover it. 00:29:54.227 [2024-11-26 20:07:54.783622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.227 [2024-11-26 20:07:54.783650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.227 qpair failed and we were unable to recover it. 00:29:54.227 [2024-11-26 20:07:54.784000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.227 [2024-11-26 20:07:54.784028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.227 qpair failed and we were unable to recover it. 00:29:54.227 [2024-11-26 20:07:54.784386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.227 [2024-11-26 20:07:54.784417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.227 qpair failed and we were unable to recover it. 00:29:54.227 [2024-11-26 20:07:54.784775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.227 [2024-11-26 20:07:54.784804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.227 qpair failed and we were unable to recover it. 
00:29:54.227 [2024-11-26 20:07:54.785186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.227 [2024-11-26 20:07:54.785217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:54.227 qpair failed and we were unable to recover it.
[... the same three-entry pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats roughly 200 more times between 20:07:54.785 and 20:07:54.865 ...]
00:29:54.233 [2024-11-26 20:07:54.864824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.233 [2024-11-26 20:07:54.864853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:54.233 qpair failed and we were unable to recover it.
00:29:54.233 [2024-11-26 20:07:54.865263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.865292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 00:29:54.233 [2024-11-26 20:07:54.865665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.865696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 00:29:54.233 [2024-11-26 20:07:54.866034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.866064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 00:29:54.233 [2024-11-26 20:07:54.866428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.866459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 00:29:54.233 [2024-11-26 20:07:54.866838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.866867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 00:29:54.233 [2024-11-26 20:07:54.867224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.867256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 00:29:54.233 [2024-11-26 20:07:54.867635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.867671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 00:29:54.233 [2024-11-26 20:07:54.867972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.868008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 00:29:54.233 [2024-11-26 20:07:54.868391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.868421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 00:29:54.233 [2024-11-26 20:07:54.868782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.868811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 
00:29:54.233 [2024-11-26 20:07:54.869178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.869207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 00:29:54.233 [2024-11-26 20:07:54.869561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.869589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 00:29:54.233 [2024-11-26 20:07:54.869959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.869988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 00:29:54.233 [2024-11-26 20:07:54.870367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.870398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 00:29:54.233 [2024-11-26 20:07:54.870759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.870788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 00:29:54.233 [2024-11-26 20:07:54.871148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.871187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 00:29:54.233 [2024-11-26 20:07:54.871548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.871577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 00:29:54.233 [2024-11-26 20:07:54.871946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.871974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 00:29:54.233 [2024-11-26 20:07:54.872283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.872321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 00:29:54.233 [2024-11-26 20:07:54.872717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.872745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 
00:29:54.233 [2024-11-26 20:07:54.873119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.873148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 00:29:54.233 [2024-11-26 20:07:54.873404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.873434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 00:29:54.233 [2024-11-26 20:07:54.873806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.873835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 00:29:54.233 [2024-11-26 20:07:54.874198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.874228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 00:29:54.233 [2024-11-26 20:07:54.874592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.874621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 00:29:54.233 [2024-11-26 20:07:54.874908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.874936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 00:29:54.233 [2024-11-26 20:07:54.875302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.875332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 00:29:54.233 [2024-11-26 20:07:54.875700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.875728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 00:29:54.233 [2024-11-26 20:07:54.876098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.876126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 00:29:54.233 [2024-11-26 20:07:54.876499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.233 [2024-11-26 20:07:54.876530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.233 qpair failed and we were unable to recover it. 
00:29:54.233 [2024-11-26 20:07:54.876887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.876916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.877147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.877190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.877555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.877584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.878008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.878037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.878470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.878501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.878843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.878872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.879239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.879269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.879639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.879669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.880003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.880032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.880389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.880418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 
00:29:54.234 [2024-11-26 20:07:54.880779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.880807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.881183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.881213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.881586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.881614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.881959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.881987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.882357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.882388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.882623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.882652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.883011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.883042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.883307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.883343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.883743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.883772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.884126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.884155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 
00:29:54.234 [2024-11-26 20:07:54.884605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.884637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.884994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.885024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.885387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.885418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.885768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.885798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.886032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.886064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.886426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.886457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.886723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.886754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.886994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.887025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.887404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.887435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.887786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.887817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 
00:29:54.234 [2024-11-26 20:07:54.888179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.888209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.888566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.888596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.888960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.888992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.889346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.889377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.889724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.889754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.890005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.890036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.890444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.890475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.890841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.890871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.234 qpair failed and we were unable to recover it. 00:29:54.234 [2024-11-26 20:07:54.891253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.234 [2024-11-26 20:07:54.891284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.891641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.891671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 
00:29:54.235 [2024-11-26 20:07:54.892024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.892052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.892429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.892459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.892820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.892849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.893206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.893237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.893601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.893636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.893994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.894024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.894330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.894362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.894725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.894756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.895185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.895217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.895583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.895612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 
00:29:54.235 [2024-11-26 20:07:54.895984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.896013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.896381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.896410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.896767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.896797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.897179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.897209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.897577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.897607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.897964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.897994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.898331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.898361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.898731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.898759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.899120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.899151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.899529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.899560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 
00:29:54.235 [2024-11-26 20:07:54.899919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.899949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.900390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.900423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.900788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.900817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.901185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.901215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.901818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.901849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.902081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.902111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.902272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.902305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.902692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.902722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.903066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.903098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.903446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.903477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 
00:29:54.235 [2024-11-26 20:07:54.903834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.903864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.904311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.904343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.904699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.904729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.905102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.905130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.905369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.905403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.905750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.905780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.906151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.235 [2024-11-26 20:07:54.906195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.235 qpair failed and we were unable to recover it. 00:29:54.235 [2024-11-26 20:07:54.906584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.236 [2024-11-26 20:07:54.906613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.236 qpair failed and we were unable to recover it. 00:29:54.236 [2024-11-26 20:07:54.906982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.236 [2024-11-26 20:07:54.907011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.236 qpair failed and we were unable to recover it. 00:29:54.236 [2024-11-26 20:07:54.907461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.236 [2024-11-26 20:07:54.907491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.236 qpair failed and we were unable to recover it. 
00:29:54.236 [2024-11-26 20:07:54.907861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.236 [2024-11-26 20:07:54.907889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.236 qpair failed and we were unable to recover it. 00:29:54.236 [2024-11-26 20:07:54.908252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.236 [2024-11-26 20:07:54.908282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.236 qpair failed and we were unable to recover it. 00:29:54.236 [2024-11-26 20:07:54.908620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.236 [2024-11-26 20:07:54.908651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.236 qpair failed and we were unable to recover it. 00:29:54.236 [2024-11-26 20:07:54.909082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.236 [2024-11-26 20:07:54.909110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.236 qpair failed and we were unable to recover it. 00:29:54.236 [2024-11-26 20:07:54.909481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.236 [2024-11-26 20:07:54.909511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.236 qpair failed and we were unable to recover it. 00:29:54.236 [2024-11-26 20:07:54.909873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.236 [2024-11-26 20:07:54.909910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.236 qpair failed and we were unable to recover it. 00:29:54.236 [2024-11-26 20:07:54.910293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.236 [2024-11-26 20:07:54.910323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.236 qpair failed and we were unable to recover it. 00:29:54.236 [2024-11-26 20:07:54.910689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.236 [2024-11-26 20:07:54.910720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.236 qpair failed and we were unable to recover it. 00:29:54.236 [2024-11-26 20:07:54.910958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.236 [2024-11-26 20:07:54.910986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.236 qpair failed and we were unable to recover it. 00:29:54.236 [2024-11-26 20:07:54.911338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.236 [2024-11-26 20:07:54.911370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.236 qpair failed and we were unable to recover it. 
00:29:54.236 [2024-11-26 20:07:54.911727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.236 [2024-11-26 20:07:54.911756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.236 qpair failed and we were unable to recover it. 00:29:54.236 [2024-11-26 20:07:54.912115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.236 [2024-11-26 20:07:54.912144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.236 qpair failed and we were unable to recover it. 00:29:54.236 [2024-11-26 20:07:54.912565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.236 [2024-11-26 20:07:54.912597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.236 qpair failed and we were unable to recover it. 00:29:54.236 [2024-11-26 20:07:54.912949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.236 [2024-11-26 20:07:54.912980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.236 qpair failed and we were unable to recover it. 00:29:54.236 [2024-11-26 20:07:54.913366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.236 [2024-11-26 20:07:54.913397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.236 qpair failed and we were unable to recover it. 00:29:54.236 [2024-11-26 20:07:54.913734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.236 [2024-11-26 20:07:54.913764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.236 qpair failed and we were unable to recover it. 00:29:54.236 [2024-11-26 20:07:54.914108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.236 [2024-11-26 20:07:54.914136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.236 qpair failed and we were unable to recover it. 00:29:54.236 [2024-11-26 20:07:54.914523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.236 [2024-11-26 20:07:54.914553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.236 qpair failed and we were unable to recover it. 00:29:54.236 [2024-11-26 20:07:54.914792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.236 [2024-11-26 20:07:54.914821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.236 qpair failed and we were unable to recover it. 00:29:54.236 [2024-11-26 20:07:54.915191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.236 [2024-11-26 20:07:54.915223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.236 qpair failed and we were unable to recover it. 
00:29:54.236 [2024-11-26 20:07:54.915560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.236 [2024-11-26 20:07:54.915589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:54.236 qpair failed and we were unable to recover it.
00:29:54.236 [2024-11-26 20:07:54.915948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.236 [2024-11-26 20:07:54.915978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:54.236 qpair failed and we were unable to recover it.
[... the same three-line error repeats for every reconnect attempt from 20:07:54.916 through 20:07:54.999, each failing with errno = 111 on tqpair=0xd070c0, addr=10.0.0.2, port=4420 ...]
00:29:54.242 [2024-11-26 20:07:54.999653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:54.999682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 00:29:54.242 [2024-11-26 20:07:55.000044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:55.000074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 00:29:54.242 [2024-11-26 20:07:55.000449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:55.000488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 00:29:54.242 [2024-11-26 20:07:55.000853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:55.000881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 00:29:54.242 [2024-11-26 20:07:55.001260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:55.001291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 00:29:54.242 [2024-11-26 20:07:55.001662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:55.001690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 00:29:54.242 [2024-11-26 20:07:55.002055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:55.002084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 00:29:54.242 [2024-11-26 20:07:55.002373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:55.002404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 00:29:54.242 [2024-11-26 20:07:55.002754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:55.002783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 00:29:54.242 [2024-11-26 20:07:55.003148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:55.003188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 
00:29:54.242 [2024-11-26 20:07:55.003539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:55.003569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 00:29:54.242 [2024-11-26 20:07:55.003927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:55.003958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 00:29:54.242 [2024-11-26 20:07:55.004300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:55.004330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 00:29:54.242 [2024-11-26 20:07:55.004580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:55.004608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 00:29:54.242 [2024-11-26 20:07:55.004955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:55.004983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 00:29:54.242 [2024-11-26 20:07:55.005349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:55.005380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 00:29:54.242 [2024-11-26 20:07:55.005730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:55.005759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 00:29:54.242 [2024-11-26 20:07:55.006138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:55.006180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 00:29:54.242 [2024-11-26 20:07:55.006560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:55.006589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 00:29:54.242 [2024-11-26 20:07:55.006961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:55.006989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 
00:29:54.242 [2024-11-26 20:07:55.007369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:55.007405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 00:29:54.242 [2024-11-26 20:07:55.007743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:55.007773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 00:29:54.242 [2024-11-26 20:07:55.008145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:55.008182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 00:29:54.242 [2024-11-26 20:07:55.008522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:55.008552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 00:29:54.242 [2024-11-26 20:07:55.008929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:55.008958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 00:29:54.242 [2024-11-26 20:07:55.009321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:55.009353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 00:29:54.242 [2024-11-26 20:07:55.009706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:55.009735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 00:29:54.242 [2024-11-26 20:07:55.010181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:55.010212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 00:29:54.242 [2024-11-26 20:07:55.010608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.242 [2024-11-26 20:07:55.010637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.242 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.010986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.011015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 
00:29:54.243 [2024-11-26 20:07:55.011384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.011413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.011785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.011814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.012179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.012208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.012559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.012587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.012882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.012910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.013283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.013314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.013654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.013683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.014045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.014074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.014426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.014457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.014816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.014845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 
00:29:54.243 [2024-11-26 20:07:55.015095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.015125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.015474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.015506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.015866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.015896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.016233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.016263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.016612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.016640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.017011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.017040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.017382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.017412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.017759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.017800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.018156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.018197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.018542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.018571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 
00:29:54.243 [2024-11-26 20:07:55.018926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.018954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.019317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.019347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.019703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.019732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.020092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.020120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.020360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.020393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.020814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.020844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.021183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.021213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.021549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.021578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.021936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.021965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.022334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.022364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 
00:29:54.243 [2024-11-26 20:07:55.022707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.022736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.023097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.023128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.023505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.023536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.243 qpair failed and we were unable to recover it. 00:29:54.243 [2024-11-26 20:07:55.023897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.243 [2024-11-26 20:07:55.023926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.244 qpair failed and we were unable to recover it. 00:29:54.244 [2024-11-26 20:07:55.024305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.244 [2024-11-26 20:07:55.024335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.244 qpair failed and we were unable to recover it. 00:29:54.244 [2024-11-26 20:07:55.024691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.244 [2024-11-26 20:07:55.024720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.244 qpair failed and we were unable to recover it. 00:29:54.244 [2024-11-26 20:07:55.025084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.244 [2024-11-26 20:07:55.025114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.244 qpair failed and we were unable to recover it. 00:29:54.244 [2024-11-26 20:07:55.025482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.244 [2024-11-26 20:07:55.025513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.244 qpair failed and we were unable to recover it. 00:29:54.244 [2024-11-26 20:07:55.025881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.244 [2024-11-26 20:07:55.025909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.244 qpair failed and we were unable to recover it. 00:29:54.244 [2024-11-26 20:07:55.026350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.244 [2024-11-26 20:07:55.026382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.244 qpair failed and we were unable to recover it. 
00:29:54.244 [2024-11-26 20:07:55.026735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.244 [2024-11-26 20:07:55.026764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.244 qpair failed and we were unable to recover it. 00:29:54.244 [2024-11-26 20:07:55.027124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.244 [2024-11-26 20:07:55.027153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.244 qpair failed and we were unable to recover it. 00:29:54.244 [2024-11-26 20:07:55.027518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.244 [2024-11-26 20:07:55.027548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.244 qpair failed and we were unable to recover it. 00:29:54.244 [2024-11-26 20:07:55.027916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.244 [2024-11-26 20:07:55.027945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.244 qpair failed and we were unable to recover it. 00:29:54.244 [2024-11-26 20:07:55.028312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.244 [2024-11-26 20:07:55.028342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.244 qpair failed and we were unable to recover it. 00:29:54.244 [2024-11-26 20:07:55.028704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.244 [2024-11-26 20:07:55.028733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.244 qpair failed and we were unable to recover it. 00:29:54.244 [2024-11-26 20:07:55.029070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.244 [2024-11-26 20:07:55.029099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.244 qpair failed and we were unable to recover it. 00:29:54.244 [2024-11-26 20:07:55.029457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.244 [2024-11-26 20:07:55.029489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.244 qpair failed and we were unable to recover it. 00:29:54.244 [2024-11-26 20:07:55.029750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.244 [2024-11-26 20:07:55.029778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.244 qpair failed and we were unable to recover it. 00:29:54.244 [2024-11-26 20:07:55.030152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.244 [2024-11-26 20:07:55.030197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.244 qpair failed and we were unable to recover it. 
00:29:54.515 [2024-11-26 20:07:55.030542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.515 [2024-11-26 20:07:55.030574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.515 qpair failed and we were unable to recover it. 00:29:54.515 [2024-11-26 20:07:55.030935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.515 [2024-11-26 20:07:55.030967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.515 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.031336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.031366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.031723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.031752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.032110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.032142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.032524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.032553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.032915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.032943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.033306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.033337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.033712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.033746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.034115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.034144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 
00:29:54.516 [2024-11-26 20:07:55.034522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.034553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.034906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.034934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.035295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.035326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.035762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.035790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.036145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.036185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.036524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.036553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.036900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.036929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.037299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.037329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.037686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.037715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.038073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.038102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 
00:29:54.516 [2024-11-26 20:07:55.038383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.038413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.038752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.038781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.039171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.039202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.039578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.039606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.039970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.039999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.040455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.040485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.040819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.040847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.041207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.041236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.041579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.041608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.041964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.041992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 
00:29:54.516 [2024-11-26 20:07:55.042339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.042368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.042718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.042747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.043107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.043135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.043556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.043586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.043940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.043970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.516 [2024-11-26 20:07:55.044311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.516 [2024-11-26 20:07:55.044348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.516 qpair failed and we were unable to recover it. 00:29:54.517 [2024-11-26 20:07:55.044712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.517 [2024-11-26 20:07:55.044742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.517 qpair failed and we were unable to recover it. 00:29:54.517 [2024-11-26 20:07:55.045093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.517 [2024-11-26 20:07:55.045121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.517 qpair failed and we were unable to recover it. 00:29:54.517 [2024-11-26 20:07:55.045368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.517 [2024-11-26 20:07:55.045402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.517 qpair failed and we were unable to recover it. 00:29:54.517 [2024-11-26 20:07:55.045740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.517 [2024-11-26 20:07:55.045770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.517 qpair failed and we were unable to recover it. 
00:29:54.517 [2024-11-26 20:07:55.046184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.517 [2024-11-26 20:07:55.046215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.517 qpair failed and we were unable to recover it. 00:29:54.517 [2024-11-26 20:07:55.046569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.517 [2024-11-26 20:07:55.046598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.517 qpair failed and we were unable to recover it. 00:29:54.517 [2024-11-26 20:07:55.046953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.517 [2024-11-26 20:07:55.046982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.517 qpair failed and we were unable to recover it. 00:29:54.517 [2024-11-26 20:07:55.047338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.517 [2024-11-26 20:07:55.047368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.517 qpair failed and we were unable to recover it. 00:29:54.517 [2024-11-26 20:07:55.047733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.517 [2024-11-26 20:07:55.047761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.517 qpair failed and we were unable to recover it. 00:29:54.517 [2024-11-26 20:07:55.048118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.517 [2024-11-26 20:07:55.048146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.517 qpair failed and we were unable to recover it. 00:29:54.517 [2024-11-26 20:07:55.048451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.517 [2024-11-26 20:07:55.048480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.517 qpair failed and we were unable to recover it. 00:29:54.517 [2024-11-26 20:07:55.048847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.517 [2024-11-26 20:07:55.048876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.517 qpair failed and we were unable to recover it. 00:29:54.517 [2024-11-26 20:07:55.049120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.517 [2024-11-26 20:07:55.049149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.517 qpair failed and we were unable to recover it. 00:29:54.517 [2024-11-26 20:07:55.049508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.517 [2024-11-26 20:07:55.049539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.517 qpair failed and we were unable to recover it. 
00:29:54.517 [2024-11-26 20:07:55.049750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.517 [2024-11-26 20:07:55.049782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:54.517 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair failure repeats ~210 times for tqpair=0xd070c0 (addr=10.0.0.2, port=4420, errno = 111) between 20:07:55.049 and 20:07:55.130 ...]
00:29:54.523 [2024-11-26 20:07:55.130432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.523 [2024-11-26 20:07:55.130462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:54.523 qpair failed and we were unable to recover it.
00:29:54.523 [2024-11-26 20:07:55.130831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.130860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.131233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.131264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.131672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.131701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.132068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.132098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.132504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.132534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.132931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.132962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.133324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.133356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.133718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.133749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.134113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.134143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.134494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.134526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 
00:29:54.523 [2024-11-26 20:07:55.134861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.134891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.135186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.135217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.135601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.135631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.135987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.136018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.136387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.136420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.136773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.136803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.137155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.137212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.137590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.137622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.137976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.138006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.138299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.138332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 
00:29:54.523 [2024-11-26 20:07:55.138694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.138732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.139073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.139103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.139315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.139349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.139732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.139762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.140112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.140143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.140404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.140434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.140792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.140821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.141183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.141215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.141582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.141612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.141980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.142011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 
00:29:54.523 [2024-11-26 20:07:55.142283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.142316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.142532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.142563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.142915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.142947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.143313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.143345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.143708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.143739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.144085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.144116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.144570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.523 [2024-11-26 20:07:55.144603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.523 qpair failed and we were unable to recover it. 00:29:54.523 [2024-11-26 20:07:55.144953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.144982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.145336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.145367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.145736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.145766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 
00:29:54.524 [2024-11-26 20:07:55.146122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.146153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.146445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.146476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.146827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.146859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.147220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.147252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.147665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.147696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.148073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.148104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.148443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.148474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.148835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.148865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.149098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.149131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.149531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.149562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 
00:29:54.524 [2024-11-26 20:07:55.149927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.149957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.150195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.150228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.150580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.150610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.150969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.151001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.151392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.151425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.151770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.151800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.152154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.152208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.152574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.152603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.152974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.153003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.153335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.153367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 
00:29:54.524 [2024-11-26 20:07:55.153740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.153769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.154179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.154216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.154575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.154606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.154992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.155022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.155403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.155435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.155782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.155813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.156179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.156209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.156652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.156684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.157051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.157081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.157516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.157546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 
00:29:54.524 [2024-11-26 20:07:55.157894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.157922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.158281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.158312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.158685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.158715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.159065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.159096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.159464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.524 [2024-11-26 20:07:55.159495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.524 qpair failed and we were unable to recover it. 00:29:54.524 [2024-11-26 20:07:55.159855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.159886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.160235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.160267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.160625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.160655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.160995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.161025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.161381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.161412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 
00:29:54.525 [2024-11-26 20:07:55.161788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.161819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.162184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.162216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.162472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.162502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.162832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.162861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.163216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.163247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.163687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.163716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.164073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.164104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.164471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.164503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.164902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.164937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.165188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.165218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 
00:29:54.525 [2024-11-26 20:07:55.165569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.165598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.166010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.166041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.166401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.166438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.166777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.166809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.167183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.167214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.167573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.167602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.167963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.167993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.168348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.168379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.168734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.168763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.169121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.169149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 
00:29:54.525 [2024-11-26 20:07:55.169517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.169546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.169903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.169933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.170192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.170224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.170565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.170593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.170928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.170959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.171299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.171331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.171711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.171741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.172103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.172134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.172425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.525 [2024-11-26 20:07:55.172456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.525 qpair failed and we were unable to recover it. 00:29:54.525 [2024-11-26 20:07:55.172747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.526 [2024-11-26 20:07:55.172776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.526 qpair failed and we were unable to recover it. 
00:29:54.526 [2024-11-26 20:07:55.173137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.526 [2024-11-26 20:07:55.173183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.526 qpair failed and we were unable to recover it. 00:29:54.526 [2024-11-26 20:07:55.173563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.526 [2024-11-26 20:07:55.173592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.526 qpair failed and we were unable to recover it. 00:29:54.526 [2024-11-26 20:07:55.173953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.526 [2024-11-26 20:07:55.173983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.526 qpair failed and we were unable to recover it. 00:29:54.526 [2024-11-26 20:07:55.174388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.526 [2024-11-26 20:07:55.174420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.526 qpair failed and we were unable to recover it. 00:29:54.526 [2024-11-26 20:07:55.174842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.526 [2024-11-26 20:07:55.174874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.526 qpair failed and we were unable to recover it. 00:29:54.526 [2024-11-26 20:07:55.175222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.526 [2024-11-26 20:07:55.175252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.526 qpair failed and we were unable to recover it. 00:29:54.526 [2024-11-26 20:07:55.175623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.526 [2024-11-26 20:07:55.175655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.526 qpair failed and we were unable to recover it. 00:29:54.526 [2024-11-26 20:07:55.176013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.526 [2024-11-26 20:07:55.176045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.526 qpair failed and we were unable to recover it. 00:29:54.526 [2024-11-26 20:07:55.176408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.526 [2024-11-26 20:07:55.176440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.526 qpair failed and we were unable to recover it. 00:29:54.526 [2024-11-26 20:07:55.176778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.526 [2024-11-26 20:07:55.176811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.526 qpair failed and we were unable to recover it. 
00:29:54.526 [2024-11-26 20:07:55.177181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.526 [2024-11-26 20:07:55.177212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.526 qpair failed and we were unable to recover it. 00:29:54.526 [2024-11-26 20:07:55.177578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.526 [2024-11-26 20:07:55.177607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.526 qpair failed and we were unable to recover it. 00:29:54.526 [2024-11-26 20:07:55.177968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.526 [2024-11-26 20:07:55.177998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.526 qpair failed and we were unable to recover it. 00:29:54.526 [2024-11-26 20:07:55.178245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.526 [2024-11-26 20:07:55.178275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.526 qpair failed and we were unable to recover it. 00:29:54.526 [2024-11-26 20:07:55.178560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.526 [2024-11-26 20:07:55.178589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.526 qpair failed and we were unable to recover it. 00:29:54.526 [2024-11-26 20:07:55.178954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.526 [2024-11-26 20:07:55.178984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.526 qpair failed and we were unable to recover it. 00:29:54.526 [2024-11-26 20:07:55.179258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.526 [2024-11-26 20:07:55.179289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.526 qpair failed and we were unable to recover it. 00:29:54.526 [2024-11-26 20:07:55.179654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.526 [2024-11-26 20:07:55.179685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.526 qpair failed and we were unable to recover it. 00:29:54.526 [2024-11-26 20:07:55.180047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.526 [2024-11-26 20:07:55.180080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.526 qpair failed and we were unable to recover it. 00:29:54.526 [2024-11-26 20:07:55.180445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.526 [2024-11-26 20:07:55.180482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.526 qpair failed and we were unable to recover it. 
00:29:54.526 [2024-11-26 20:07:55.180820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.526 [2024-11-26 20:07:55.180851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:54.526 qpair failed and we were unable to recover it.
[... the same three-record error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats with successive timestamps from 2024-11-26 20:07:55.181185 through 20:07:55.260215 ...]
00:29:54.534 [2024-11-26 20:07:55.260595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.534 [2024-11-26 20:07:55.260623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:54.534 qpair failed and we were unable to recover it.
00:29:54.534 [2024-11-26 20:07:55.261004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.534 [2024-11-26 20:07:55.261042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.534 qpair failed and we were unable to recover it. 00:29:54.534 [2024-11-26 20:07:55.261384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.534 [2024-11-26 20:07:55.261415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.534 qpair failed and we were unable to recover it. 00:29:54.534 [2024-11-26 20:07:55.261778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.534 [2024-11-26 20:07:55.261807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.534 qpair failed and we were unable to recover it. 00:29:54.534 [2024-11-26 20:07:55.262205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.534 [2024-11-26 20:07:55.262236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.534 qpair failed and we were unable to recover it. 00:29:54.534 [2024-11-26 20:07:55.262439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.534 [2024-11-26 20:07:55.262471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.534 qpair failed and we were unable to recover it. 00:29:54.534 [2024-11-26 20:07:55.262902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.534 [2024-11-26 20:07:55.262930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.534 qpair failed and we were unable to recover it. 00:29:54.534 [2024-11-26 20:07:55.263286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.534 [2024-11-26 20:07:55.263316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.534 qpair failed and we were unable to recover it. 00:29:54.534 [2024-11-26 20:07:55.263554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.534 [2024-11-26 20:07:55.263584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.534 qpair failed and we were unable to recover it. 00:29:54.534 [2024-11-26 20:07:55.263957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.534 [2024-11-26 20:07:55.263986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.534 qpair failed and we were unable to recover it. 00:29:54.534 [2024-11-26 20:07:55.264348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.534 [2024-11-26 20:07:55.264379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.534 qpair failed and we were unable to recover it. 
00:29:54.534 [2024-11-26 20:07:55.264718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.534 [2024-11-26 20:07:55.264747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.534 qpair failed and we were unable to recover it. 00:29:54.534 [2024-11-26 20:07:55.265107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.534 [2024-11-26 20:07:55.265136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.534 qpair failed and we were unable to recover it. 00:29:54.534 [2024-11-26 20:07:55.265540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.534 [2024-11-26 20:07:55.265571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.534 qpair failed and we were unable to recover it. 00:29:54.534 [2024-11-26 20:07:55.265933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.534 [2024-11-26 20:07:55.265962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.534 qpair failed and we were unable to recover it. 00:29:54.534 [2024-11-26 20:07:55.266341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.534 [2024-11-26 20:07:55.266372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.534 qpair failed and we were unable to recover it. 00:29:54.534 [2024-11-26 20:07:55.266731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.534 [2024-11-26 20:07:55.266759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.534 qpair failed and we were unable to recover it. 00:29:54.534 [2024-11-26 20:07:55.267106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.534 [2024-11-26 20:07:55.267135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.534 qpair failed and we were unable to recover it. 00:29:54.534 [2024-11-26 20:07:55.267515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.534 [2024-11-26 20:07:55.267545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.534 qpair failed and we were unable to recover it. 00:29:54.534 [2024-11-26 20:07:55.267913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.534 [2024-11-26 20:07:55.267940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.534 qpair failed and we were unable to recover it. 00:29:54.534 [2024-11-26 20:07:55.268289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.534 [2024-11-26 20:07:55.268319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.534 qpair failed and we were unable to recover it. 
00:29:54.534 [2024-11-26 20:07:55.268661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.534 [2024-11-26 20:07:55.268689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.534 qpair failed and we were unable to recover it. 00:29:54.534 [2024-11-26 20:07:55.269050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.534 [2024-11-26 20:07:55.269079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.534 qpair failed and we were unable to recover it. 00:29:54.534 [2024-11-26 20:07:55.269454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.534 [2024-11-26 20:07:55.269484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.534 qpair failed and we were unable to recover it. 00:29:54.534 [2024-11-26 20:07:55.269816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.269845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.270203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.270233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.270611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.270639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.270990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.271019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.271404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.271433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.271880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.271909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.272268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.272298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 
00:29:54.535 [2024-11-26 20:07:55.272625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.272653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.272892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.272920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.273283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.273313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.273572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.273603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.274022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.274058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.274423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.274453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.274818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.274846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.275215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.275246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.275618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.275647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.276009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.276037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 
00:29:54.535 [2024-11-26 20:07:55.276426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.276456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.276810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.276838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.277087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.277119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.277557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.277587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.277930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.277959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.278329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.278359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.278740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.278768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.279012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.279041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.279413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.279443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.279798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.279827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 
00:29:54.535 [2024-11-26 20:07:55.280205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.280235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.280619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.280648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.280997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.281025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.281383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.281414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.281770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.281799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.282176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.282206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.282557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.282587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.282961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.282990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.283340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.283370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.283608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.283637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 
00:29:54.535 [2024-11-26 20:07:55.283973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.284003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.535 qpair failed and we were unable to recover it. 00:29:54.535 [2024-11-26 20:07:55.284340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.535 [2024-11-26 20:07:55.284376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.284732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.284761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.285125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.285152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.285533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.285562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.285940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.285969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.286337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.286368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.286715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.286744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.287104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.287132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.287496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.287526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 
00:29:54.536 [2024-11-26 20:07:55.287897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.287936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.288276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.288306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.288553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.288582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.288974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.289002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.289349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.289380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.289738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.289767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.290113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.290141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.290487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.290516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.290875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.290905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.291273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.291303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 
00:29:54.536 [2024-11-26 20:07:55.291665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.291693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.292055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.292083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.292442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.292472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.292706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.292733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.293105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.293134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.293527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.293556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.293927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.293955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.294321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.294352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.294717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.294746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.295104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.295134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 
00:29:54.536 [2024-11-26 20:07:55.295404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.295433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.295783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.295811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.296125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.296154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.296517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.296546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.536 [2024-11-26 20:07:55.296906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.536 [2024-11-26 20:07:55.296935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.536 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.297297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.297328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.297699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.297728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.298090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.298119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.298466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.298496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.298827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.298856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 
00:29:54.537 [2024-11-26 20:07:55.299219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.299250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.299626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.299655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.300008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.300041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.300385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.300415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.300780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.300808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.301178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.301208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.301552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.301581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.301988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.302018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.302382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.302413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.302761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.302791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 
00:29:54.537 [2024-11-26 20:07:55.303134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.303171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.303533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.303561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.303907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.303937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.304299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.304329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.304701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.304729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.305091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.305118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.305513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.305545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.305918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.305947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.306245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.306276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.306659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.306688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 
00:29:54.537 [2024-11-26 20:07:55.307051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.307079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.307431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.307461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.307832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.307860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.308217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.308246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.308650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.308678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.309036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.309064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.309413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.309444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.309814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.309842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.310095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.310123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 00:29:54.537 [2024-11-26 20:07:55.310567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.537 [2024-11-26 20:07:55.310599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.537 qpair failed and we were unable to recover it. 
00:29:54.537 [2024-11-26 20:07:55.310963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.537 [2024-11-26 20:07:55.310993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:54.537 qpair failed and we were unable to recover it.
[... identical record triple — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeated verbatim through [2024-11-26 20:07:55.391268] (log prefixes 00:29:54.537-00:29:54.815) ...]
00:29:54.815 [2024-11-26 20:07:55.391639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 20:07:55.391668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 20:07:55.391995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 20:07:55.392025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 20:07:55.392393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 20:07:55.392423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 20:07:55.392785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 20:07:55.392815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 20:07:55.393040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 20:07:55.393070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 20:07:55.393459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 20:07:55.393489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 20:07:55.393731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 20:07:55.393763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 20:07:55.394154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 20:07:55.394202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 20:07:55.394607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 20:07:55.394636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 20:07:55.394997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 20:07:55.395026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 
00:29:54.815 [2024-11-26 20:07:55.395396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 20:07:55.395426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 20:07:55.395794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 20:07:55.395823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 20:07:55.396176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 20:07:55.396206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 20:07:55.396556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 20:07:55.396584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 20:07:55.396946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.396976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.397341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.397371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.397737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.397767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.398147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.398190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.398476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.398504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.398845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.398875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 
00:29:54.816 [2024-11-26 20:07:55.399238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.399268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.399633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.399663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.399993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.400023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.400281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.400311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.400697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.400728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.400973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.401008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.401338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.401370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.401726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.401756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.402124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.402153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.402586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.402615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 
00:29:54.816 [2024-11-26 20:07:55.402964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.402993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.403342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.403373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.403734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.403762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.404137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.404178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.404529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.404564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.404912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.404941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.405303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.405335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.405700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.405730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.405971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.406003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.406338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.406371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 
00:29:54.816 [2024-11-26 20:07:55.406739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.406769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.407125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.407154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.407525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.407556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.407760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.407790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.408180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.408210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.408550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.408585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.408928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.408957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.409317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.409350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.409712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.409741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.410102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.410134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 
00:29:54.816 [2024-11-26 20:07:55.410500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.410531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.410932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.410961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 20:07:55.411317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 20:07:55.411347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.411763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.411792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.412126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.412156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.412522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.412551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.412919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.412950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.413305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.413338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.413611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.413640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.414025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.414055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 
00:29:54.817 [2024-11-26 20:07:55.414423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.414454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.414818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.414849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.415207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.415238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.415602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.415631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.415937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.415965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.416326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.416356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.416624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.416654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.417032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.417062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.417404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.417437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.417782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.417811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 
00:29:54.817 [2024-11-26 20:07:55.418255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.418286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.418638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.418667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.419034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.419063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.419415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.419447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.419803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.419834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.420187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.420223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.420456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.420486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.420858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.420889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.421251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.421284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.421651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.421680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 
00:29:54.817 [2024-11-26 20:07:55.422045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.422076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.422440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.422472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.422832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.422863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.423224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.423253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.423613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.423642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.423948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.423979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.424337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.424367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.424593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.424626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.425060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.425090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-11-26 20:07:55.425447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.425478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 
00:29:54.817 [2024-11-26 20:07:55.425840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-11-26 20:07:55.425871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.426228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.426260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.426607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.426638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.426983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.427012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.427392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.427424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.427757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.427785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.428148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.428192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.428542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.428571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.428904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.428932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.429280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.429312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 
00:29:54.818 [2024-11-26 20:07:55.429574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.429604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.429981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.430010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.430266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.430304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.430699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.430729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.431066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.431095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.431484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.431517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.431877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.431907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.432287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.432319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.432691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.432720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.433123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.433152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 
00:29:54.818 [2024-11-26 20:07:55.433519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.433550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.433917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.433948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.434296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.434328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.434707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.434737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.435108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.435137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.435522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.435554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.435940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.435971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.436342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.436373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.436705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.436734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.437078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.437108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 
00:29:54.818 [2024-11-26 20:07:55.437503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.437534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.437762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.437794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.438184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.438216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.438580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.438611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.438975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.439005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-11-26 20:07:55.439373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-11-26 20:07:55.439405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.819 [2024-11-26 20:07:55.439757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.819 [2024-11-26 20:07:55.439787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.819 qpair failed and we were unable to recover it. 00:29:54.819 [2024-11-26 20:07:55.440146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.819 [2024-11-26 20:07:55.440188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.819 qpair failed and we were unable to recover it. 00:29:54.819 [2024-11-26 20:07:55.440584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.819 [2024-11-26 20:07:55.440615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.819 qpair failed and we were unable to recover it. 00:29:54.819 [2024-11-26 20:07:55.440976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.819 [2024-11-26 20:07:55.441006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.819 qpair failed and we were unable to recover it. 
00:29:54.819 [2024-11-26 20:07:55.441448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.819 [2024-11-26 20:07:55.441480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:54.819 qpair failed and we were unable to recover it.
00:29:54.819 [2024-11-26 20:07:55.441819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.819 [2024-11-26 20:07:55.441850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:54.819 qpair failed and we were unable to recover it.
00:29:54.819 [2024-11-26 20:07:55.442232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.819 [2024-11-26 20:07:55.442263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:54.819 qpair failed and we were unable to recover it.
[... the same three-line sequence -- "connect() failed, errno = 111" / "sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." -- repeats for every further reconnect attempt between 20:07:55.442 and 20:07:55.522 ...]
00:29:54.825 [2024-11-26 20:07:55.522032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.825 [2024-11-26 20:07:55.522061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:54.825 qpair failed and we were unable to recover it.
00:29:54.825 [2024-11-26 20:07:55.522426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-11-26 20:07:55.522456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 00:29:54.825 [2024-11-26 20:07:55.522829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-11-26 20:07:55.522858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 00:29:54.825 [2024-11-26 20:07:55.523222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-11-26 20:07:55.523255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 00:29:54.825 [2024-11-26 20:07:55.523622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-11-26 20:07:55.523651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 00:29:54.825 [2024-11-26 20:07:55.524024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-11-26 20:07:55.524053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 00:29:54.825 [2024-11-26 20:07:55.524436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-11-26 20:07:55.524466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 00:29:54.825 [2024-11-26 20:07:55.524824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-11-26 20:07:55.524853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 00:29:54.825 [2024-11-26 20:07:55.525210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-11-26 20:07:55.525246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 00:29:54.825 [2024-11-26 20:07:55.525613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-11-26 20:07:55.525642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 00:29:54.825 [2024-11-26 20:07:55.526008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-11-26 20:07:55.526037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 
00:29:54.825 [2024-11-26 20:07:55.526380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-11-26 20:07:55.526411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 00:29:54.825 [2024-11-26 20:07:55.526747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-11-26 20:07:55.526777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 00:29:54.825 [2024-11-26 20:07:55.527121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-11-26 20:07:55.527149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 00:29:54.825 [2024-11-26 20:07:55.527502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-11-26 20:07:55.527532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 00:29:54.825 [2024-11-26 20:07:55.527904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-11-26 20:07:55.527933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 00:29:54.825 [2024-11-26 20:07:55.528309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-11-26 20:07:55.528341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 00:29:54.825 [2024-11-26 20:07:55.528708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-11-26 20:07:55.528737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 00:29:54.825 [2024-11-26 20:07:55.529083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-11-26 20:07:55.529114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 00:29:54.825 [2024-11-26 20:07:55.529347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-11-26 20:07:55.529376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 00:29:54.825 [2024-11-26 20:07:55.529736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-11-26 20:07:55.529765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 
00:29:54.825 [2024-11-26 20:07:55.530115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-11-26 20:07:55.530145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 00:29:54.825 [2024-11-26 20:07:55.530519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-11-26 20:07:55.530549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 00:29:54.825 [2024-11-26 20:07:55.530913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.530943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.531300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.531331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.531685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.531713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.532119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.532149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.532423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.532456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.532837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.532866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.533122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.533151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.533553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.533583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 
00:29:54.826 [2024-11-26 20:07:55.533935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.533964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.534322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.534352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.534707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.534736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.535098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.535127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.535482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.535512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.535883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.535913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.536282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.536313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.536600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.536629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.536983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.537012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.537369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.537398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 
00:29:54.826 [2024-11-26 20:07:55.537769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.537797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.538150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.538206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.538559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.538589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.538951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.538980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.539227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.539260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.539646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.539675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.540032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.540060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.540422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.540451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.540822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.540857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.541110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.541139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 
00:29:54.826 [2024-11-26 20:07:55.541508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.541537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.541896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.541926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.542293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.542324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.542675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.542704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.543054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.543084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.543330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.543360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.543709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.543740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.544093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.544122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.544405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.544434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 00:29:54.826 [2024-11-26 20:07:55.544814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.544843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.826 qpair failed and we were unable to recover it. 
00:29:54.826 [2024-11-26 20:07:55.545185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.826 [2024-11-26 20:07:55.545217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.545590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.545620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.545982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.546013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.546383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.546414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.546774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.546806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.547182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.547212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.547539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.547569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.547943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.547971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.548342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.548373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.548708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.548737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 
00:29:54.827 [2024-11-26 20:07:55.549097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.549126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.549498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.549527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.549915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.549943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.550295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.550324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.550693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.550723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.551089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.551125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.551499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.551530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.551888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.551918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.552280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.552312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.552694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.552722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 
00:29:54.827 [2024-11-26 20:07:55.553086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.553114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.553465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.553495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.553800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.553830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.554178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.554208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.554540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.554570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.554934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.554962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.555338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.555368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.555806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.555833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.556185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.556216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.556558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.556588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 
00:29:54.827 [2024-11-26 20:07:55.556948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.556976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.557337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.557366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.557735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.557764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.558198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.558228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.558503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.558531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.558866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.558894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.559255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.559285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.559688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.559717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.560054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.827 [2024-11-26 20:07:55.560085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.827 qpair failed and we were unable to recover it. 00:29:54.827 [2024-11-26 20:07:55.560435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.560466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 
00:29:54.828 [2024-11-26 20:07:55.560824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.560853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 00:29:54.828 [2024-11-26 20:07:55.561109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.561138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 00:29:54.828 [2024-11-26 20:07:55.561516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.561547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 00:29:54.828 [2024-11-26 20:07:55.561901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.561930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 00:29:54.828 [2024-11-26 20:07:55.562319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.562349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 00:29:54.828 [2024-11-26 20:07:55.562660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.562690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 00:29:54.828 [2024-11-26 20:07:55.563054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.563083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 00:29:54.828 [2024-11-26 20:07:55.563457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.563487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 00:29:54.828 [2024-11-26 20:07:55.563841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.563870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 00:29:54.828 [2024-11-26 20:07:55.564216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.564247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 
00:29:54.828 [2024-11-26 20:07:55.564613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.564642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 00:29:54.828 [2024-11-26 20:07:55.565002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.565032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 00:29:54.828 [2024-11-26 20:07:55.565391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.565422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 00:29:54.828 [2024-11-26 20:07:55.565801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.565830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 00:29:54.828 [2024-11-26 20:07:55.566194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.566230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 00:29:54.828 [2024-11-26 20:07:55.566612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.566641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 00:29:54.828 [2024-11-26 20:07:55.567008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.567049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 00:29:54.828 [2024-11-26 20:07:55.567406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.567445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 00:29:54.828 [2024-11-26 20:07:55.567786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.567816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 00:29:54.828 [2024-11-26 20:07:55.568181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.568212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 
00:29:54.828 [2024-11-26 20:07:55.568561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.568594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 00:29:54.828 [2024-11-26 20:07:55.568962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.568993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 00:29:54.828 [2024-11-26 20:07:55.569351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.569383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 00:29:54.828 [2024-11-26 20:07:55.569733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.569762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 00:29:54.828 [2024-11-26 20:07:55.570110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.570139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 00:29:54.828 [2024-11-26 20:07:55.570520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.570551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 00:29:54.828 [2024-11-26 20:07:55.570909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.570940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 00:29:54.828 [2024-11-26 20:07:55.571305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.571336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 00:29:54.828 [2024-11-26 20:07:55.571704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.571733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 00:29:54.828 [2024-11-26 20:07:55.572099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.828 [2024-11-26 20:07:55.572126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:54.828 qpair failed and we were unable to recover it. 
00:29:54.828 [2024-11-26 20:07:55.572448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.828 [2024-11-26 20:07:55.572480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:54.828 qpair failed and we were unable to recover it.
[... the same three-line failure repeats back-to-back from 20:07:55.572838 through 20:07:55.655024: every connection attempt for tqpair=0xd070c0 (addr=10.0.0.2, port=4420) ends in "connect() failed, errno = 111" from posix_sock_create, the matching sock connection error from nvme_tcp_qpair_connect_sock, and "qpair failed and we were unable to recover it." ...]
00:29:55.107 [2024-11-26 20:07:55.655782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.107 [2024-11-26 20:07:55.655811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:55.107 qpair failed and we were unable to recover it.
00:29:55.107 [2024-11-26 20:07:55.656156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.656211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 00:29:55.107 [2024-11-26 20:07:55.656571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.656602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 00:29:55.107 [2024-11-26 20:07:55.656860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.656890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 00:29:55.107 [2024-11-26 20:07:55.657244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.657276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 00:29:55.107 [2024-11-26 20:07:55.657638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.657669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 00:29:55.107 [2024-11-26 20:07:55.658040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.658070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 00:29:55.107 [2024-11-26 20:07:55.658465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.658496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 00:29:55.107 [2024-11-26 20:07:55.658759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.658788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 00:29:55.107 [2024-11-26 20:07:55.659179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.659209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 00:29:55.107 [2024-11-26 20:07:55.659664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.659693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 
00:29:55.107 [2024-11-26 20:07:55.660058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.660089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 00:29:55.107 [2024-11-26 20:07:55.660444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.660474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 00:29:55.107 [2024-11-26 20:07:55.660838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.660867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 00:29:55.107 [2024-11-26 20:07:55.661235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.661268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 00:29:55.107 [2024-11-26 20:07:55.661627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.661658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 00:29:55.107 [2024-11-26 20:07:55.661899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.661931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 00:29:55.107 [2024-11-26 20:07:55.662293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.662326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 00:29:55.107 [2024-11-26 20:07:55.662680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.662712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 00:29:55.107 [2024-11-26 20:07:55.663078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.663106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 00:29:55.107 [2024-11-26 20:07:55.663505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.663537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 
00:29:55.107 [2024-11-26 20:07:55.663905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.663944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 00:29:55.107 [2024-11-26 20:07:55.664294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.664332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 00:29:55.107 [2024-11-26 20:07:55.664732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.664761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 00:29:55.107 [2024-11-26 20:07:55.665132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.665176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 00:29:55.107 [2024-11-26 20:07:55.665555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.665584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 00:29:55.107 [2024-11-26 20:07:55.666012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.666042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 00:29:55.107 [2024-11-26 20:07:55.666488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.666520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 00:29:55.107 [2024-11-26 20:07:55.666856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.666887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 00:29:55.107 [2024-11-26 20:07:55.667242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.107 [2024-11-26 20:07:55.667272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.107 qpair failed and we were unable to recover it. 00:29:55.107 [2024-11-26 20:07:55.667644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.667674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 
00:29:55.108 [2024-11-26 20:07:55.668011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.668040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.668454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.668486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.668925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.668956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.669321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.669351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.669692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.669724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.670123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.670155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.670598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.670629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.670997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.671028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.671440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.671473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.671827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.671858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 
00:29:55.108 [2024-11-26 20:07:55.672068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.672101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.672516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.672550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.672887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.672918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.673308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.673344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.673714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.673744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.673993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.674022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.674280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.674312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.674670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.674701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.675044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.675081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.675432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.675465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 
00:29:55.108 [2024-11-26 20:07:55.675852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.675883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.676293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.676325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.676733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.676763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.677138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.677181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.677563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.677592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.677956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.677987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.678445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.678476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.678846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.678876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.679062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.679093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.679461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.679493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 
00:29:55.108 [2024-11-26 20:07:55.679861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.679890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.680243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.680273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.680635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.680668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.681033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.681063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.681490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.681521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.681968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.681998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.682401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.682431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 20:07:55.682818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 20:07:55.682848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 20:07:55.683094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.683123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 20:07:55.683453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.683485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 
00:29:55.109 [2024-11-26 20:07:55.683837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.683870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 20:07:55.684239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.684271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 20:07:55.684728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.684759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 20:07:55.685048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.685080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 20:07:55.685444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.685475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 20:07:55.685842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.685874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 20:07:55.686262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.686292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 20:07:55.686658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.686691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 20:07:55.687042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.687073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 20:07:55.687455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.687486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 
00:29:55.109 [2024-11-26 20:07:55.687775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.687805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 20:07:55.688204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.688237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 20:07:55.688586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.688616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 20:07:55.688933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.688962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 20:07:55.689325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.689357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 20:07:55.689688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.689717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 20:07:55.690099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.690128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 20:07:55.690513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.690543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 20:07:55.690938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.690966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 20:07:55.691271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.691306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 
00:29:55.109 [2024-11-26 20:07:55.691704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.691732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 20:07:55.692094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.692122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 20:07:55.692518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.692548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 20:07:55.692905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.692935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 20:07:55.693327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.693358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 20:07:55.693727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.693755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 20:07:55.694180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.694210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 20:07:55.694581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.694610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 20:07:55.694980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 20:07:55.695008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 20:07:55.695181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.695213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 
00:29:55.110 [2024-11-26 20:07:55.695561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.695590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 20:07:55.695950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.695978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 20:07:55.696387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.696418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 20:07:55.696761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.696792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 20:07:55.697175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.697207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 20:07:55.697482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.697514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 20:07:55.697863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.697892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 20:07:55.698265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.698297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 20:07:55.698683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.698712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 20:07:55.699082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.699110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 
00:29:55.110 [2024-11-26 20:07:55.699386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.699416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 20:07:55.699784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.699814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 20:07:55.700082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.700111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 20:07:55.700476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.700506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 20:07:55.700895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.700923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 20:07:55.701214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.701244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 20:07:55.701680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.701717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 20:07:55.701975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.702003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 20:07:55.702477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.702508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 20:07:55.702845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.702874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 
00:29:55.110 [2024-11-26 20:07:55.703241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.703271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 20:07:55.703637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.703666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 20:07:55.703961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.703989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 20:07:55.704395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.704425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 20:07:55.704819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.704847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 20:07:55.705204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.705235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 20:07:55.705623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.705651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 20:07:55.706024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.706053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 20:07:55.706424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.706454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 20:07:55.706795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 20:07:55.706824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 
00:29:55.110 [2024-11-26 20:07:55.707190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.110 [2024-11-26 20:07:55.707222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:55.110 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats continuously from 2024-11-26 20:07:55.707 through 20:07:55.787, log time 00:29:55.110-00:29:55.117 ...]
00:29:55.117 [2024-11-26 20:07:55.787270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.117 [2024-11-26 20:07:55.787300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:55.117 qpair failed and we were unable to recover it.
00:29:55.117 [2024-11-26 20:07:55.787674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.117 [2024-11-26 20:07:55.787703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.117 qpair failed and we were unable to recover it. 00:29:55.117 [2024-11-26 20:07:55.788062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.117 [2024-11-26 20:07:55.788091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.117 qpair failed and we were unable to recover it. 00:29:55.117 [2024-11-26 20:07:55.788453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.117 [2024-11-26 20:07:55.788483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.117 qpair failed and we were unable to recover it. 00:29:55.117 [2024-11-26 20:07:55.788826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.117 [2024-11-26 20:07:55.788855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.117 qpair failed and we were unable to recover it. 00:29:55.117 [2024-11-26 20:07:55.789217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.117 [2024-11-26 20:07:55.789248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.117 qpair failed and we were unable to recover it. 00:29:55.117 [2024-11-26 20:07:55.789613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.117 [2024-11-26 20:07:55.789640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.117 qpair failed and we were unable to recover it. 00:29:55.117 [2024-11-26 20:07:55.790010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.117 [2024-11-26 20:07:55.790039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.117 qpair failed and we were unable to recover it. 00:29:55.117 [2024-11-26 20:07:55.790376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.117 [2024-11-26 20:07:55.790408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.117 qpair failed and we were unable to recover it. 00:29:55.117 [2024-11-26 20:07:55.790782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.117 [2024-11-26 20:07:55.790811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.117 qpair failed and we were unable to recover it. 00:29:55.117 [2024-11-26 20:07:55.791190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.117 [2024-11-26 20:07:55.791222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.117 qpair failed and we were unable to recover it. 
00:29:55.117 [2024-11-26 20:07:55.791573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.117 [2024-11-26 20:07:55.791602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.117 qpair failed and we were unable to recover it. 00:29:55.117 [2024-11-26 20:07:55.791960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.117 [2024-11-26 20:07:55.791990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.117 qpair failed and we were unable to recover it. 00:29:55.117 [2024-11-26 20:07:55.792358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.117 [2024-11-26 20:07:55.792388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.117 qpair failed and we were unable to recover it. 00:29:55.117 [2024-11-26 20:07:55.792841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.117 [2024-11-26 20:07:55.792871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.117 qpair failed and we were unable to recover it. 00:29:55.117 [2024-11-26 20:07:55.793215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.117 [2024-11-26 20:07:55.793252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.117 qpair failed and we were unable to recover it. 00:29:55.117 [2024-11-26 20:07:55.793621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.117 [2024-11-26 20:07:55.793649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.117 qpair failed and we were unable to recover it. 00:29:55.117 [2024-11-26 20:07:55.794002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.117 [2024-11-26 20:07:55.794031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.117 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.794387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.794418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.794775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.794803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.795183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.795214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 
00:29:55.118 [2024-11-26 20:07:55.795550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.795580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.795925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.795961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.796354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.796385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.796749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.796777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.797139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.797180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.797547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.797577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.797935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.797964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.798325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.798355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.798699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.798728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.799103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.799133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 
00:29:55.118 [2024-11-26 20:07:55.799537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.799569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.799928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.799957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.800314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.800345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.800756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.800784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.801049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.801077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.801443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.801474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.801811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.801841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.802098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.802128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.802508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.802540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.802900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.802928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 
00:29:55.118 [2024-11-26 20:07:55.803307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.803336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.803713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.803742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.804120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.804148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.804523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.804552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.804880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.804909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.805315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.805345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.805698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.805728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.806100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.118 [2024-11-26 20:07:55.806129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.118 qpair failed and we were unable to recover it. 00:29:55.118 [2024-11-26 20:07:55.806366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.806399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 20:07:55.806756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.806786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 
00:29:55.119 [2024-11-26 20:07:55.807144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.807185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 20:07:55.807436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.807464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 20:07:55.807823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.807851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 20:07:55.808215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.808246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 20:07:55.808705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.808735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 20:07:55.809091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.809120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 20:07:55.809490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.809521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 20:07:55.809881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.809911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 20:07:55.810362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.810393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 20:07:55.810739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.810769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 
00:29:55.119 [2024-11-26 20:07:55.811123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.811152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 20:07:55.811460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.811489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 20:07:55.811852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.811887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 20:07:55.812243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.812275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 20:07:55.812649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.812677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 20:07:55.812928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.812958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 20:07:55.813287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.813317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 20:07:55.813719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.813748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 20:07:55.814099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.814129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 20:07:55.814513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.814543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 
00:29:55.119 [2024-11-26 20:07:55.814989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.815019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 20:07:55.815395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.815427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 20:07:55.815804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.815840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 20:07:55.816170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.816200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 20:07:55.816559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.816589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 20:07:55.816875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.816903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 20:07:55.817253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.817284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 20:07:55.817728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.817758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 20:07:55.818129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 20:07:55.818175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 20:07:55.818549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 20:07:55.818578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 
00:29:55.120 [2024-11-26 20:07:55.818797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 20:07:55.818825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 20:07:55.819176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 20:07:55.819208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 20:07:55.819579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 20:07:55.819608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 20:07:55.819970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 20:07:55.819998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 20:07:55.820222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 20:07:55.820252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 20:07:55.820600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 20:07:55.820629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 20:07:55.820991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 20:07:55.821020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 20:07:55.821399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 20:07:55.821430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 20:07:55.821799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 20:07:55.821828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 20:07:55.822198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 20:07:55.822234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 
00:29:55.120 [2024-11-26 20:07:55.822624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 20:07:55.822652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 20:07:55.823029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 20:07:55.823057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 20:07:55.823412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 20:07:55.823441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 20:07:55.823811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 20:07:55.823840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 20:07:55.824202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 20:07:55.824233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 20:07:55.824615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 20:07:55.824643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 20:07:55.824978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 20:07:55.825006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 20:07:55.825371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 20:07:55.825401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 20:07:55.825771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 20:07:55.825799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 20:07:55.826199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 20:07:55.826230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 
00:29:55.120 [2024-11-26 20:07:55.826604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 20:07:55.826634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 20:07:55.826988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 20:07:55.827016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 20:07:55.827398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 20:07:55.827429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 20:07:55.827794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 20:07:55.827824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 20:07:55.828182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 20:07:55.828213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 20:07:55.828591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 20:07:55.828621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 20:07:55.828986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 20:07:55.829015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 20:07:55.829394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 20:07:55.829424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 20:07:55.829765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 20:07:55.829794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 20:07:55.830157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 20:07:55.830199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 
00:29:55.121 [2024-11-26 20:07:55.830562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 20:07:55.830592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 20:07:55.830957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 20:07:55.830986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 20:07:55.831357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 20:07:55.831388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 20:07:55.831746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 20:07:55.831775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 20:07:55.832150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 20:07:55.832191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 20:07:55.832524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 20:07:55.832554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 20:07:55.832917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 20:07:55.832948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 20:07:55.833316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 20:07:55.833347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 20:07:55.833722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 20:07:55.833751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 20:07:55.834114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 20:07:55.834142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 
00:29:55.121 [2024-11-26 20:07:55.834576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 20:07:55.834606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 20:07:55.834953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 20:07:55.834983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 20:07:55.835342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 20:07:55.835373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 20:07:55.835734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 20:07:55.835763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 20:07:55.836131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 20:07:55.836170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 20:07:55.836540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 20:07:55.836569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 20:07:55.836968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 20:07:55.836996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 20:07:55.837346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 20:07:55.837377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 20:07:55.837746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 20:07:55.837776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 20:07:55.838126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 20:07:55.838155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 
00:29:55.121 [2024-11-26 20:07:55.838532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.121 [2024-11-26 20:07:55.838568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:55.121 qpair failed and we were unable to recover it.
[... the same three-line failure record repeats roughly 200 more times between 20:07:55.838 and 20:07:55.924 (console time 00:29:55.121-00:29:55.401): posix_sock_create reports connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports a sock connection error for the same tqpair=0xd070c0 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:55.402 [2024-11-26 20:07:55.924500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.924530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-26 20:07:55.924884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.924917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-26 20:07:55.925064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.925098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-26 20:07:55.925482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.925514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-26 20:07:55.925749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.925780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-26 20:07:55.926061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.926093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-26 20:07:55.926509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.926539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-26 20:07:55.926902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.926932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-26 20:07:55.927222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.927256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-26 20:07:55.927615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.927644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 
00:29:55.402 [2024-11-26 20:07:55.927981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.928010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-26 20:07:55.928309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.928340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-26 20:07:55.928703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.928734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-26 20:07:55.929095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.929128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-26 20:07:55.929530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.929560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-26 20:07:55.929925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.929959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-26 20:07:55.930356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.930386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-26 20:07:55.930658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.930687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-26 20:07:55.931045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.931075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-26 20:07:55.931305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.931337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 
00:29:55.402 [2024-11-26 20:07:55.931706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.931736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-26 20:07:55.932083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.932114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-26 20:07:55.932586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.932618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-26 20:07:55.932992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.933022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-26 20:07:55.933475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.933506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-26 20:07:55.933860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.933890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-26 20:07:55.934283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.934314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-26 20:07:55.934663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.934693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-26 20:07:55.934955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.934987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-26 20:07:55.935417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.935448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 
00:29:55.402 [2024-11-26 20:07:55.935793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-26 20:07:55.935822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.936227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.936258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.936512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.936544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.936896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.936928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.937323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.937354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.937726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.937761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.938123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.938155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.938439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.938469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.938856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.938889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.939300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.939332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 
00:29:55.403 [2024-11-26 20:07:55.939717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.939751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.940184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.940216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.940482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.940511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.940764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.940794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.941019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.941048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.941559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.941590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.941969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.942000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.942270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.942301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.942570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.942599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.942999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.943031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 
00:29:55.403 [2024-11-26 20:07:55.943425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.943457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.943830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.943859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.944211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.944243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.944622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.944652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.944938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.944967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.945319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.945349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.945722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.945753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.946127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.946155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.946538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.946569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.946917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.946946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 
00:29:55.403 [2024-11-26 20:07:55.947382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.947412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.947757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.947786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.948153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.948204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.948496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.948525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.948892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.948921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.949201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.949231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.949599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.949628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.950001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.950030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.950285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-26 20:07:55.950315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-26 20:07:55.950685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.950714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 
00:29:55.404 [2024-11-26 20:07:55.951085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.951113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.951429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.951458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.951809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.951838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.952204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.952236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.952598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.952627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.952989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.953018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.953361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.953391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.953729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.953758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.954122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.954150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.954593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.954623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 
00:29:55.404 [2024-11-26 20:07:55.954994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.955022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.955384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.955414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.955788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.955817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.956186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.956216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.956611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.956641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.956991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.957019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.957392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.957426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.957771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.957800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.958139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.958183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.958531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.958561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 
00:29:55.404 [2024-11-26 20:07:55.958917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.958946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.959295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.959327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.959677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.959708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.960070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.960099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.960461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.960491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.960850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.960880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.961136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.961192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.961647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.961677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.962068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.962096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.962475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.962504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 
00:29:55.404 [2024-11-26 20:07:55.962870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.962899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.963304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.963335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.963687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.963716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.964083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.964119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.964526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.964555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.964912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.964940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.965213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.965244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-26 20:07:55.965505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-26 20:07:55.965533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.405 [2024-11-26 20:07:55.965854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.405 [2024-11-26 20:07:55.965883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.405 qpair failed and we were unable to recover it. 00:29:55.405 [2024-11-26 20:07:55.966291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.405 [2024-11-26 20:07:55.966321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.405 qpair failed and we were unable to recover it. 
00:29:55.405 [2024-11-26 20:07:55.966706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.405 [2024-11-26 20:07:55.966736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.405 qpair failed and we were unable to recover it. 00:29:55.405 [2024-11-26 20:07:55.967101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.405 [2024-11-26 20:07:55.967131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.405 qpair failed and we were unable to recover it. 00:29:55.405 [2024-11-26 20:07:55.967397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.405 [2024-11-26 20:07:55.967427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.405 qpair failed and we were unable to recover it. 00:29:55.405 [2024-11-26 20:07:55.967580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.405 [2024-11-26 20:07:55.967612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.405 qpair failed and we were unable to recover it. 00:29:55.405 [2024-11-26 20:07:55.968040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.405 [2024-11-26 20:07:55.968071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.405 qpair failed and we were unable to recover it. 00:29:55.405 [2024-11-26 20:07:55.968407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.405 [2024-11-26 20:07:55.968437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.405 qpair failed and we were unable to recover it. 00:29:55.405 [2024-11-26 20:07:55.968800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.405 [2024-11-26 20:07:55.968830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.405 qpair failed and we were unable to recover it. 00:29:55.405 [2024-11-26 20:07:55.969186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.405 [2024-11-26 20:07:55.969217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.405 qpair failed and we were unable to recover it. 00:29:55.405 [2024-11-26 20:07:55.969573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.405 [2024-11-26 20:07:55.969603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.405 qpair failed and we were unable to recover it. 00:29:55.405 [2024-11-26 20:07:55.969973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.405 [2024-11-26 20:07:55.970002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.405 qpair failed and we were unable to recover it. 
00:29:55.405 [2024-11-26 20:07:55.970298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.405 [2024-11-26 20:07:55.970328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.405 qpair failed and we were unable to recover it. 00:29:55.405 [2024-11-26 20:07:55.970704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.405 [2024-11-26 20:07:55.970733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.405 qpair failed and we were unable to recover it. 00:29:55.405 [2024-11-26 20:07:55.971086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.405 [2024-11-26 20:07:55.971115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.405 qpair failed and we were unable to recover it. 00:29:55.405 [2024-11-26 20:07:55.971360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.405 [2024-11-26 20:07:55.971391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.405 qpair failed and we were unable to recover it. 00:29:55.405 [2024-11-26 20:07:55.971779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.405 [2024-11-26 20:07:55.971808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.405 qpair failed and we were unable to recover it. 00:29:55.405 [2024-11-26 20:07:55.972184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.405 [2024-11-26 20:07:55.972214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.405 qpair failed and we were unable to recover it. 00:29:55.405 [2024-11-26 20:07:55.972592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.405 [2024-11-26 20:07:55.972620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.405 qpair failed and we were unable to recover it. 00:29:55.405 [2024-11-26 20:07:55.972960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.405 [2024-11-26 20:07:55.972989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.405 qpair failed and we were unable to recover it. 00:29:55.405 [2024-11-26 20:07:55.973351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.405 [2024-11-26 20:07:55.973380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.405 qpair failed and we were unable to recover it. 00:29:55.405 [2024-11-26 20:07:55.973741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.405 [2024-11-26 20:07:55.973770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.405 qpair failed and we were unable to recover it. 
00:29:55.405 [2024-11-26 20:07:55.974119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.405 [2024-11-26 20:07:55.974154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.405 qpair failed and we were unable to recover it.
[The same three-message error group repeats continuously from 20:07:55.974 to 20:07:56.054 (over 200 near-identical occurrences, all with errno = 111, tqpair=0xd070c0, addr=10.0.0.2, port=4420; only the timestamps differ).]
00:29:55.411 [2024-11-26 20:07:56.054085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 20:07:56.054114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it.
00:29:55.411 [2024-11-26 20:07:56.054505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 20:07:56.054535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 20:07:56.054798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 20:07:56.054826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 20:07:56.055198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 20:07:56.055230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 20:07:56.055580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 20:07:56.055610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 20:07:56.055974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 20:07:56.056003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 20:07:56.056298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 20:07:56.056329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 20:07:56.056729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 20:07:56.056757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 20:07:56.057198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 20:07:56.057228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 20:07:56.057624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 20:07:56.057659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 20:07:56.058003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 20:07:56.058033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 
00:29:55.411 [2024-11-26 20:07:56.058387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 20:07:56.058417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 20:07:56.058773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 20:07:56.058802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 20:07:56.059181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 20:07:56.059210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 20:07:56.059562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 20:07:56.059591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 20:07:56.059962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 20:07:56.059993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 20:07:56.060351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 20:07:56.060382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 20:07:56.060731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 20:07:56.060760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 20:07:56.061124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 20:07:56.061153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 20:07:56.061504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 20:07:56.061534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 20:07:56.061900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 20:07:56.061929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 
00:29:55.411 [2024-11-26 20:07:56.062377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 20:07:56.062408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 20:07:56.062775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 20:07:56.062804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 20:07:56.063142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 20:07:56.063195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 20:07:56.063541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 20:07:56.063569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 20:07:56.063931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 20:07:56.063959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.064319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.064350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.064696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.064725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.064975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.065002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.065382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.065412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.065778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.065807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 
00:29:55.412 [2024-11-26 20:07:56.066179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.066208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.066548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.066578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.066936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.066967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.067342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.067372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.067748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.067777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.068129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.068177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.068542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.068571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.068930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.068959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.069323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.069355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.069728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.069757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 
00:29:55.412 [2024-11-26 20:07:56.070064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.070094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.070358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.070388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.070766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.070794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.071148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.071189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.071543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.071573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.071947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.071976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.072340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.072369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.072777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.072806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.073157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.073200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.073544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.073574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 
00:29:55.412 [2024-11-26 20:07:56.073953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.073983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.074341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.074371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.074709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.074738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.075099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.075127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.075539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.075571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.075896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.075927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.076212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.076243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.076602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.076630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.076996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.077024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.077394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.077424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 
00:29:55.412 [2024-11-26 20:07:56.077681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.077712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.078092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.078122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.078544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.412 [2024-11-26 20:07:56.078577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.412 qpair failed and we were unable to recover it. 00:29:55.412 [2024-11-26 20:07:56.078923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.078953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.079292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.079322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.079690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.079719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.080080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.080108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.080476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.080507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.080874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.080903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.081262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.081292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 
00:29:55.413 [2024-11-26 20:07:56.081733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.081762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.082088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.082117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.082542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.082574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.082914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.082943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.083290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.083321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.083687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.083716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.084085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.084120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.084485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.084514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.084863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.084892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.085265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.085297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 
00:29:55.413 [2024-11-26 20:07:56.085649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.085678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.086037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.086066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.086328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.086357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.086619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.086648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.086995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.087025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.087402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.087433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.087796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.087826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.088193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.088223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.088626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.088661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.089028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.089056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 
00:29:55.413 [2024-11-26 20:07:56.089430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.089460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.089822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.089853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.090207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.090237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.090614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.090642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.090987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.091016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.091376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.091406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.091760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.091789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.092175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.092206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.092557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.092586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.092949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.092977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 
00:29:55.413 [2024-11-26 20:07:56.093234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.413 [2024-11-26 20:07:56.093264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.413 qpair failed and we were unable to recover it. 00:29:55.413 [2024-11-26 20:07:56.093622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.093651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 00:29:55.414 [2024-11-26 20:07:56.094008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.094038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 00:29:55.414 [2024-11-26 20:07:56.094392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.094428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 00:29:55.414 [2024-11-26 20:07:56.094752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.094783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 00:29:55.414 [2024-11-26 20:07:56.095139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.095182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 00:29:55.414 [2024-11-26 20:07:56.095536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.095565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 00:29:55.414 [2024-11-26 20:07:56.095894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.095923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 00:29:55.414 [2024-11-26 20:07:56.096290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.096320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 00:29:55.414 [2024-11-26 20:07:56.096664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.096694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 
00:29:55.414 [2024-11-26 20:07:56.096937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.096968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 00:29:55.414 [2024-11-26 20:07:56.097319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.097349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 00:29:55.414 [2024-11-26 20:07:56.097736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.097765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 00:29:55.414 [2024-11-26 20:07:56.098141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.098183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 00:29:55.414 [2024-11-26 20:07:56.098538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.098566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 00:29:55.414 [2024-11-26 20:07:56.098824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.098853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 00:29:55.414 [2024-11-26 20:07:56.099207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.099238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 00:29:55.414 [2024-11-26 20:07:56.099608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.099639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 00:29:55.414 [2024-11-26 20:07:56.099994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.100024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 00:29:55.414 [2024-11-26 20:07:56.100290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.100320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 
00:29:55.414 [2024-11-26 20:07:56.100734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.100762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 00:29:55.414 [2024-11-26 20:07:56.101107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.101138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 00:29:55.414 [2024-11-26 20:07:56.101518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.101548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 00:29:55.414 [2024-11-26 20:07:56.101916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.101944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 00:29:55.414 [2024-11-26 20:07:56.102310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.102339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 00:29:55.414 [2024-11-26 20:07:56.102702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.102730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 00:29:55.414 [2024-11-26 20:07:56.103098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.103126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 00:29:55.414 [2024-11-26 20:07:56.103510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.103547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 00:29:55.414 [2024-11-26 20:07:56.103900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.103930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 00:29:55.414 [2024-11-26 20:07:56.104295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.414 [2024-11-26 20:07:56.104326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.414 qpair failed and we were unable to recover it. 
00:29:55.414 [2024-11-26 20:07:56.104587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.414 [2024-11-26 20:07:56.104615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:55.414 qpair failed and we were unable to recover it.
00:29:55.415 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock / "qpair failed" error triple repeats continuously as the host retries the connection, 2024-11-26 20:07:56.105 through 20:07:56.119 ...]
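The failure mode being logged here is worth decoding: errno = 111 on Linux is ECONNREFUSED, meaning nothing is listening on 10.0.0.2:4420, so every connect() issued by nvme_tcp_qpair_connect_sock is actively refused by the target side and the host gives up on the qpair. A quick, hypothetical way to confirm the errno mapping on the build host (not part of the harness; assumes python3 is on the CI image):

    # Hypothetical one-liner: translate errno 111 to its symbolic name.
    # errno and os are Python stdlib modules.
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # prints: ECONNREFUSED - Connection refused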
00:29:55.415 [... connect() retries continue to fail with errno = 111 in the background throughout the trace below, 2024-11-26 20:07:56.119 through 20:07:56.122 ...]
00:29:55.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3848037 Killed "${NVMF_APP[@]}" "$@"
00:29:55.416 20:07:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:55.416 20:07:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:55.416 20:07:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:55.416 20:07:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
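The "Killed" message above is the crux of this test case: the harness itself killed the running nvmf_tgt (pid 3848037), so the refused connections are expected, and nvmf_target_disconnect_tc2 now calls disconnect_init 10.0.0.2 to bring up a replacement target. Pieced together from the traced commands, the sequence is roughly the following sketch; the function body beyond nvmfappstart is inferred from the xtrace, not copied from the scripts:

    # Sketch of the restart step, reconstructed from the trace above.
    # NVMF_APP, nvmfappstart, and the 10.0.0.2 address come from the log;
    # everything after nvmfappstart is an assumption.
    kill -9 "$nvmfpid"            # first target dies: "Killed ${NVMF_APP[@]}"
    disconnect_init() {
        local ip=$1               # 10.0.0.2 in this run
        nvmfappstart -m 0xF0      # new nvmf_tgt pinned to cores 4-7 (mask 0xF0)
        # ...transport and subsystem setup against $ip follows (not in this excerpt)
    }
    disconnect_init 10.0.0.2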
00:29:55.416 20:07:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:55.416 [... the errno = 111 error triple keeps repeating while xtrace is disabled, 2024-11-26 20:07:56.123 through 20:07:56.130 ...]
00:29:55.416 [... connect() retries keep failing with errno = 111 while the replacement target is brought up, 2024-11-26 20:07:56.130 through 20:07:56.136 ...]
00:29:55.416 20:07:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:55.416 20:07:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3848990
00:29:55.416 20:07:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3848990
00:29:55.416 20:07:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3848990 ']'
00:29:55.416 20:07:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:55.416 20:07:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:55.416 20:07:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:55.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:55.416 20:07:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:55.417 20:07:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
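waitforlisten then blocks until the new target (pid 3848990) is alive and its RPC socket accepts commands; the trace shows the defaults rpc_addr=/var/tmp/spdk.sock and max_retries=100. A loop of the same shape, as a hypothetical simplification of the real helper in autotest_common.sh:

    # Hypothetical simplification of waitforlisten: poll until the target
    # process is up and its RPC UNIX socket exists, or give up.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # process exited: fail
            [[ -S $rpc_addr ]] && return 0           # socket present: ready
            sleep 0.5
        done
        return 1
    }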
00:29:55.417 [2024-11-26 20:07:56.136381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.417 [2024-11-26 20:07:56.136414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:55.417 qpair failed and we were unable to recover it.
00:29:55.418 [... the identical error triple continues uninterrupted while the host keeps retrying, 2024-11-26 20:07:56.136 through 20:07:56.181 ...]
00:29:55.420 [2024-11-26 20:07:56.181620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 20:07:56.181649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 20:07:56.182035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 20:07:56.182067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 20:07:56.182430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 20:07:56.182462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 20:07:56.182853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 20:07:56.182885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 20:07:56.183274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 20:07:56.183304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 20:07:56.183678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 20:07:56.183710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 20:07:56.184091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 20:07:56.184122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 20:07:56.184526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 20:07:56.184559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 20:07:56.185005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 20:07:56.185036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 20:07:56.185407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 20:07:56.185443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 
00:29:55.420 [2024-11-26 20:07:56.185849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 20:07:56.185883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 20:07:56.186125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 20:07:56.186173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 20:07:56.186439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 20:07:56.186473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 20:07:56.186938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 20:07:56.186970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 20:07:56.187292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 20:07:56.187322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 20:07:56.187713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 20:07:56.187743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 20:07:56.188056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 20:07:56.188087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 20:07:56.188558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 20:07:56.188588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 20:07:56.188961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 20:07:56.188990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 20:07:56.189361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 20:07:56.189394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 
00:29:55.420 [2024-11-26 20:07:56.189779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.420 [2024-11-26 20:07:56.189808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:55.420 qpair failed and we were unable to recover it.
00:29:55.420 [2024-11-26 20:07:56.190005] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization...
00:29:55.421 [2024-11-26 20:07:56.190072] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[the same failure sequence resumes, 8 occurrences, timestamps advancing through 2024-11-26 20:07:56.192933]
00:29:55.421 [2024-11-26 20:07:56.193231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.421 [2024-11-26 20:07:56.193263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:55.421 qpair failed and we were unable to recover it.
[the same failure sequence occurs 140 more times, every attempt targeting tqpair=0xd070c0 at addr=10.0.0.2, port=4420 and failing with errno = 111; timestamps advance through 2024-11-26 20:07:56.246948]
00:29:55.699 [2024-11-26 20:07:56.247320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.699 [2024-11-26 20:07:56.247350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.699 qpair failed and we were unable to recover it. 00:29:55.699 [2024-11-26 20:07:56.247733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.699 [2024-11-26 20:07:56.247762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.699 qpair failed and we were unable to recover it. 00:29:55.699 [2024-11-26 20:07:56.248128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.699 [2024-11-26 20:07:56.248156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.699 qpair failed and we were unable to recover it. 00:29:55.699 [2024-11-26 20:07:56.248534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.699 [2024-11-26 20:07:56.248564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.699 qpair failed and we were unable to recover it. 00:29:55.699 [2024-11-26 20:07:56.248899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.699 [2024-11-26 20:07:56.248929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.699 qpair failed and we were unable to recover it. 00:29:55.699 [2024-11-26 20:07:56.249283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.699 [2024-11-26 20:07:56.249314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.699 qpair failed and we were unable to recover it. 00:29:55.699 [2024-11-26 20:07:56.249627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.699 [2024-11-26 20:07:56.249656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.699 qpair failed and we were unable to recover it. 00:29:55.699 [2024-11-26 20:07:56.250027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.699 [2024-11-26 20:07:56.250056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.699 qpair failed and we were unable to recover it. 00:29:55.699 [2024-11-26 20:07:56.250419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.699 [2024-11-26 20:07:56.250450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.699 qpair failed and we were unable to recover it. 00:29:55.699 [2024-11-26 20:07:56.250815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.699 [2024-11-26 20:07:56.250845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.699 qpair failed and we were unable to recover it. 
00:29:55.699 [2024-11-26 20:07:56.251201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.699 [2024-11-26 20:07:56.251232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.699 qpair failed and we were unable to recover it. 00:29:55.699 [2024-11-26 20:07:56.251465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.699 [2024-11-26 20:07:56.251493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.699 qpair failed and we were unable to recover it. 00:29:55.699 [2024-11-26 20:07:56.251716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.699 [2024-11-26 20:07:56.251744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.699 qpair failed and we were unable to recover it. 00:29:55.699 [2024-11-26 20:07:56.252095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.699 [2024-11-26 20:07:56.252124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.699 qpair failed and we were unable to recover it. 00:29:55.699 [2024-11-26 20:07:56.252488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.699 [2024-11-26 20:07:56.252519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.699 qpair failed and we were unable to recover it. 00:29:55.699 [2024-11-26 20:07:56.252867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.699 [2024-11-26 20:07:56.252895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.699 qpair failed and we were unable to recover it. 00:29:55.699 [2024-11-26 20:07:56.253251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.699 [2024-11-26 20:07:56.253282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.699 qpair failed and we were unable to recover it. 00:29:55.699 [2024-11-26 20:07:56.253638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.699 [2024-11-26 20:07:56.253668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.699 qpair failed and we were unable to recover it. 00:29:55.699 [2024-11-26 20:07:56.254030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.699 [2024-11-26 20:07:56.254059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.699 qpair failed and we were unable to recover it. 00:29:55.699 [2024-11-26 20:07:56.254419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.699 [2024-11-26 20:07:56.254449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.699 qpair failed and we were unable to recover it. 
00:29:55.699 [2024-11-26 20:07:56.254810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.699 [2024-11-26 20:07:56.254839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.699 qpair failed and we were unable to recover it. 00:29:55.699 [2024-11-26 20:07:56.255222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.699 [2024-11-26 20:07:56.255254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.699 qpair failed and we were unable to recover it. 00:29:55.699 [2024-11-26 20:07:56.255586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.699 [2024-11-26 20:07:56.255615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.699 qpair failed and we were unable to recover it. 00:29:55.699 [2024-11-26 20:07:56.255973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.699 [2024-11-26 20:07:56.256002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.699 qpair failed and we were unable to recover it. 00:29:55.699 [2024-11-26 20:07:56.256367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.699 [2024-11-26 20:07:56.256397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.699 qpair failed and we were unable to recover it. 00:29:55.699 [2024-11-26 20:07:56.256763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.256791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.257149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.257194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.257542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.257571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.257973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.258003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.258302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.258332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 
00:29:55.700 [2024-11-26 20:07:56.258661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.258690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.259042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.259071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.259432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.259462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.259827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.259857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.260225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.260255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.260607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.260636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.260988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.261017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.261267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.261299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.261692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.261721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.262073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.262103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 
00:29:55.700 [2024-11-26 20:07:56.262458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.262488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.262848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.262877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.263255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.263286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.263663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.263693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.264056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.264087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.264428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.264459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.264793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.264823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.265188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.265219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.265585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.265614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.265877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.265905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 
00:29:55.700 [2024-11-26 20:07:56.266270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.266302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.266675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.266705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.267066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.267094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.267454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.267485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.267842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.267871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.268232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.268268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.268618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.268647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.268877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.268906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.269286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.269317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.269678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.269707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 
00:29:55.700 [2024-11-26 20:07:56.270076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.270104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.270503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.270533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.270948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.270978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.700 [2024-11-26 20:07:56.271337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.700 [2024-11-26 20:07:56.271368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.700 qpair failed and we were unable to recover it. 00:29:55.701 [2024-11-26 20:07:56.271759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.271789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 00:29:55.701 [2024-11-26 20:07:56.272168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.272200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 00:29:55.701 [2024-11-26 20:07:56.272603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.272633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 00:29:55.701 [2024-11-26 20:07:56.273024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.273055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 00:29:55.701 [2024-11-26 20:07:56.273402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.273434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 00:29:55.701 [2024-11-26 20:07:56.273822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.273851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 
00:29:55.701 [2024-11-26 20:07:56.274217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.274247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 00:29:55.701 [2024-11-26 20:07:56.274638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.274669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 00:29:55.701 [2024-11-26 20:07:56.275049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.275078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 00:29:55.701 [2024-11-26 20:07:56.275466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.275499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 00:29:55.701 [2024-11-26 20:07:56.275858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.275887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 00:29:55.701 [2024-11-26 20:07:56.276267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.276298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 00:29:55.701 [2024-11-26 20:07:56.276649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.276679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 00:29:55.701 [2024-11-26 20:07:56.277063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.277091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 00:29:55.701 [2024-11-26 20:07:56.277494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.277524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 00:29:55.701 [2024-11-26 20:07:56.277888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.277919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 
00:29:55.701 [2024-11-26 20:07:56.278294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.278325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 00:29:55.701 [2024-11-26 20:07:56.278726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.278754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 00:29:55.701 [2024-11-26 20:07:56.278999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.279027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 00:29:55.701 [2024-11-26 20:07:56.279302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.279334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 00:29:55.701 [2024-11-26 20:07:56.279707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.279737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 00:29:55.701 [2024-11-26 20:07:56.280116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.280146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 00:29:55.701 [2024-11-26 20:07:56.280555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.280587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 00:29:55.701 [2024-11-26 20:07:56.280948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.280978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 00:29:55.701 [2024-11-26 20:07:56.281334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.281364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 00:29:55.701 [2024-11-26 20:07:56.281727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.281755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 
00:29:55.701 [2024-11-26 20:07:56.282122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.282152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 00:29:55.701 [2024-11-26 20:07:56.282570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.282599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 00:29:55.701 [2024-11-26 20:07:56.282938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.282969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 00:29:55.701 [2024-11-26 20:07:56.283351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.283383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 00:29:55.701 [2024-11-26 20:07:56.283650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.701 [2024-11-26 20:07:56.283679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.701 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.284033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.284061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.284420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.284461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.284711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.284740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.284985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.285016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.285364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.285394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 
00:29:55.702 [2024-11-26 20:07:56.285609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.285637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.286012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.286040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.286405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.286435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.286787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.286815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.287155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.287200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.287569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.287599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.287948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.287976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.288321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.288352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.288717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.288746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.289148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.289190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 
00:29:55.702 [2024-11-26 20:07:56.289542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.289572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.289927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.289956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.290374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.290404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.290752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.290782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.291189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.291221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.291507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:55.702 [2024-11-26 20:07:56.291584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.291614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.291981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.292010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.292379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.292410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.292679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.292708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.293072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.293101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 
00:29:55.702 [2024-11-26 20:07:56.293470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.293501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.293827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.293856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.294219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.294249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.294626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.294655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.295017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.295046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.295317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.295346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.295705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.295734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.295970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.296002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.296350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.296382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 00:29:55.702 [2024-11-26 20:07:56.296608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.702 [2024-11-26 20:07:56.296638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.702 qpair failed and we were unable to recover it. 
00:29:55.702 [2024-11-26 20:07:56.297001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.702 [2024-11-26 20:07:56.297031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:55.702 qpair failed and we were unable to recover it.
[... the three messages above repeat ~119 more times (20:07:56.297267 - 20:07:56.341704), identical apart from timestamps: every connect() to 10.0.0.2:4420 fails with errno = 111 and each qpair cannot be recovered ...]
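
On Linux, errno = 111 is ECONNREFUSED: the TCP SYN to 10.0.0.2:4420 (the conventional NVMe/TCP port) is being answered with a RST, meaning nothing is accepting connections on that address at this point in the test. The following is a minimal standalone sketch using plain POSIX sockets (not SPDK's actual posix_sock_create) that reproduces the exact errno when no listener is up:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);              /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on 10.0.0.2:4420 this prints:
             *   connect() failed, errno = 111 (Connection refused)        */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }
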
[... eight more identical connect()/qpair failures (20:07:56.342048 - 20:07:56.344695) ...]
00:29:55.706 [2024-11-26 20:07:56.345083] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:55.706 [2024-11-26 20:07:56.345127] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:55.706 [2024-11-26 20:07:56.345143] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:55.706 [2024-11-26 20:07:56.345153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:55.706 [2024-11-26 20:07:56.345167] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:55.706 [2024-11-26 20:07:56.347443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:29:55.706 [2024-11-26 20:07:56.347597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:29:55.706 [2024-11-26 20:07:56.347765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:29:55.706 [2024-11-26 20:07:56.347765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
[... nine more identical connect()/qpair failures (20:07:56.345100 - 20:07:56.348212), originally interleaved line-by-line with the notices above ...]
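
The app_setup_trace and reactor_run notices are routine SPDK application startup output, not part of the failure; their interleaving with the errors is just concurrent threads writing to the same console. The reactor notices show the event framework bringing up one reactor per core in the configured mask (cores 4-7 here). Conceptually, a reactor is a polling thread pinned to its core; a rough non-SPDK sketch of that pinning using GNU pthread affinity follows (reactor_loop is a placeholder for the real poller logic, and needs -pthread to build):

    #define _GNU_SOURCE                 /* for pthread_attr_setaffinity_np */
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static void *reactor_loop(void *arg)
    {
        printf("reactor running on core %d\n", *(int *)arg);
        /* a real reactor would spin here, polling its registered pollers */
        return NULL;
    }

    int main(void)
    {
        int core = 5;                   /* cf. "Reactor started on core 5" */
        cpu_set_t set;
        pthread_attr_t attr;
        pthread_t t;

        CPU_ZERO(&set);
        CPU_SET(core, &set);
        pthread_attr_init(&attr);
        pthread_attr_setaffinity_np(&attr, sizeof(set), &set);  /* pin to core */

        pthread_create(&t, &attr, reactor_loop, &core);
        pthread_join(t, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }
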
[... the same connect()/qpair failure repeats another ~70 times (20:07:56.348566 - 20:07:56.373088) before this excerpt ends ...]
00:29:55.708 [2024-11-26 20:07:56.373213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-11-26 20:07:56.373246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-11-26 20:07:56.373590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-11-26 20:07:56.373619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-11-26 20:07:56.373976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-11-26 20:07:56.374006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-11-26 20:07:56.374243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-11-26 20:07:56.374274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-11-26 20:07:56.374644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-11-26 20:07:56.374673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-11-26 20:07:56.375056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-11-26 20:07:56.375086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-11-26 20:07:56.375399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-11-26 20:07:56.375430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-11-26 20:07:56.375853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-11-26 20:07:56.375883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-11-26 20:07:56.376118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.376146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.376515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.376544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 
00:29:55.709 [2024-11-26 20:07:56.376912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.376941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.377265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.377295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.377682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.377710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.378074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.378103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.378484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.378521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.378858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.378888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.379276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.379307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.379648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.379679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.380013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.380041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.380269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.380299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 
00:29:55.709 [2024-11-26 20:07:56.380656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.380686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.381049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.381079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.381343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.381372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.381776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.381805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.382189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.382221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.382477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.382506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.382738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.382767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.383072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.383101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.383498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.383530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.383910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.383940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 
00:29:55.709 [2024-11-26 20:07:56.384280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.384313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.384662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.384692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.384907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.384938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.385301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.385332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.385659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.385690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.386066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.386097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.386453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.386484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.386836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.386867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.387220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.387250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.387607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.387636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 
00:29:55.709 [2024-11-26 20:07:56.387987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.388019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.388397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.388435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.388779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.388808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.389065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.389097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.389486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.389519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.389764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.389794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 20:07:56.390144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 20:07:56.390186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 20:07:56.390558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 20:07:56.390588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 20:07:56.390934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 20:07:56.390966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 20:07:56.391068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 20:07:56.391096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 
00:29:55.710 [2024-11-26 20:07:56.391453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 20:07:56.391484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 20:07:56.391732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 20:07:56.391761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 20:07:56.392139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 20:07:56.392194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 20:07:56.392401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 20:07:56.392430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 20:07:56.392772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 20:07:56.392801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 20:07:56.393039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 20:07:56.393069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 20:07:56.393348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 20:07:56.393379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 20:07:56.393744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 20:07:56.393772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 20:07:56.394131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 20:07:56.394170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 20:07:56.394403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 20:07:56.394432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 
00:29:55.710 [2024-11-26 20:07:56.394791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 20:07:56.394820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 20:07:56.395192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 20:07:56.395223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 20:07:56.395580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 20:07:56.395608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 20:07:56.395894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 20:07:56.395924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 20:07:56.396283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 20:07:56.396315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 20:07:56.396528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 20:07:56.396557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 20:07:56.396909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 20:07:56.396938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 20:07:56.397300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 20:07:56.397334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 20:07:56.397700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 20:07:56.397730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 20:07:56.398099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 20:07:56.398130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 
00:29:55.710 [2024-11-26 20:07:56.398510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 20:07:56.398541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.398766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.398796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.399152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.399195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.399529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.399558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.399785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.399815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.400214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.400246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.400487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.400516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.400868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.400897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.401142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.401185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.401555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.401584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 
00:29:55.711 [2024-11-26 20:07:56.401943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.401972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.402369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.402400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.402760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.402795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.403138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.403181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.403441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.403471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.403814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.403845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.404201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.404232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.404611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.404643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.404985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.405013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.405368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.405400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 
00:29:55.711 [2024-11-26 20:07:56.405751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.405780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.406134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.406179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.406474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.406503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.406889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.406918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.407265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.407296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.407665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.407694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.408065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.408096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.408310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.408340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.408557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.408585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.408936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.408966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 
00:29:55.711 [2024-11-26 20:07:56.409323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.409354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.409693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.409723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.410072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.410104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.410345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.410375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.410608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.410637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.410997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.411025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.411232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.411263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.411508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.411539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.411786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 20:07:56.411815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 20:07:56.412147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.412201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 
00:29:55.712 [2024-11-26 20:07:56.412463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.412495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 00:29:55.712 [2024-11-26 20:07:56.412759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.412788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 00:29:55.712 [2024-11-26 20:07:56.413075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.413105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 00:29:55.712 [2024-11-26 20:07:56.413454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.413485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 00:29:55.712 [2024-11-26 20:07:56.413862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.413892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 00:29:55.712 [2024-11-26 20:07:56.414256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.414288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 00:29:55.712 [2024-11-26 20:07:56.414647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.414677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 00:29:55.712 [2024-11-26 20:07:56.415007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.415036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 00:29:55.712 [2024-11-26 20:07:56.415391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.415423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 00:29:55.712 [2024-11-26 20:07:56.415760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.415792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 
00:29:55.712 [2024-11-26 20:07:56.416155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.416199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 00:29:55.712 [2024-11-26 20:07:56.416518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.416548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 00:29:55.712 [2024-11-26 20:07:56.416701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.416730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 00:29:55.712 [2024-11-26 20:07:56.417092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.417122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 00:29:55.712 [2024-11-26 20:07:56.417505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.417536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 00:29:55.712 [2024-11-26 20:07:56.417889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.417917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 00:29:55.712 [2024-11-26 20:07:56.418282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.418313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 00:29:55.712 [2024-11-26 20:07:56.418680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.418710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 00:29:55.712 [2024-11-26 20:07:56.419071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.419101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 00:29:55.712 [2024-11-26 20:07:56.419268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.419301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 
00:29:55.712 [2024-11-26 20:07:56.419647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.419676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 00:29:55.712 [2024-11-26 20:07:56.420041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.420070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 00:29:55.712 [2024-11-26 20:07:56.420427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.420459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 00:29:55.712 [2024-11-26 20:07:56.420683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.420715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 00:29:55.712 [2024-11-26 20:07:56.421104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.421134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 00:29:55.712 [2024-11-26 20:07:56.421509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.421540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 00:29:55.712 [2024-11-26 20:07:56.421892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.421920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 00:29:55.712 [2024-11-26 20:07:56.422277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.422308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 00:29:55.712 [2024-11-26 20:07:56.422522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-11-26 20:07:56.422551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.712 qpair failed and we were unable to recover it. 00:29:55.712 [2024-11-26 20:07:56.422912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.422942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 
00:29:55.713 [2024-11-26 20:07:56.423211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.423241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.423636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.423665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.423900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.423930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.424272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.424304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.424652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.424683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.424921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.424952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.425295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.425325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.425551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.425584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.426000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.426031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.426397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.426428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 
00:29:55.713 [2024-11-26 20:07:56.426543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.426583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.426909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.426940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.427208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.427238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.427637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.427668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.428022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.428053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.428407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.428439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.428804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.428833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.429215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.429247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.429592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.429622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.429990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.430018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 
00:29:55.713 [2024-11-26 20:07:56.430389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.430421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.430652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.430683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.431064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.431095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.431474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.431506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.431746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.431780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.432184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.432215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.432422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.432450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.432819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.432850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.433210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.433242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.433597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.433625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 
00:29:55.713 [2024-11-26 20:07:56.434001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.434030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.434283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.434315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.434692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.434722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.435095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.435123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.435486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.435516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.713 qpair failed and we were unable to recover it. 00:29:55.713 [2024-11-26 20:07:56.435797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.713 [2024-11-26 20:07:56.435826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.436188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.436218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.436576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.436611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.436854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.436885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.437243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.437275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 
00:29:55.714 [2024-11-26 20:07:56.437629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.437659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.438020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.438052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.438451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.438482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.438700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.438730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.439101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.439132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.439482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.439514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.439869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.439899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.440234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.440266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.440655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.440686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.440896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.440926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 
00:29:55.714 [2024-11-26 20:07:56.441136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.441182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.441461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.441495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.441813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.441844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.442201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.442232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.442603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.442634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.443002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.443032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.443417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.443450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.443659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.443689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.443908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.443938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.444281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.444313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 
00:29:55.714 [2024-11-26 20:07:56.444658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.444687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.445042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.445071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.445420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.445453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.445802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.445831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.446183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.446217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.446475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.446505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.446840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.446875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.447258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.447290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.447648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.447677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 20:07:56.448048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 20:07:56.448077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 
00:29:55.714 [2024-11-26 20:07:56.448439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.448471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 20:07:56.448876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.448905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 20:07:56.449134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.449181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 20:07:56.449438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.449468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 20:07:56.449815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.449846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 20:07:56.450107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.450142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 20:07:56.450517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.450550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 20:07:56.450919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.450949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 20:07:56.451212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.451252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 20:07:56.451590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.451620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 
00:29:55.715 [2024-11-26 20:07:56.451972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.452000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 20:07:56.452383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.452413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 20:07:56.452655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.452686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 20:07:56.452954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.452984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 20:07:56.453277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.453309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 20:07:56.453641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.453672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 20:07:56.454039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.454069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 20:07:56.454413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.454444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 20:07:56.454799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.454829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 20:07:56.455209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.455240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 
00:29:55.715 [2024-11-26 20:07:56.455629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.455659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 20:07:56.456009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.456039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 20:07:56.456392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.456423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 20:07:56.456790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.456819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 20:07:56.457077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.457109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 20:07:56.457475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.457505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 20:07:56.457772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.457801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 20:07:56.458032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.458064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 20:07:56.458428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.458458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 20:07:56.458801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.458830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 
00:29:55.715 [2024-11-26 20:07:56.459187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 20:07:56.459218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.459581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.459610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.460021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.460049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.460414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.460445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.460801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.460830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.461140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.461180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.461523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.461552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.461911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.461939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.462306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.462335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.462685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.462714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 
00:29:55.716 [2024-11-26 20:07:56.462982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.463009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.463263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.463294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.463657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.463685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.464061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.464090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.464334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.464364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.464742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.464770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.465144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.465184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.465563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.465593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.465963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.465991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.466221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.466252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 
00:29:55.716 [2024-11-26 20:07:56.466348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.466378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.466697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.466726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.467102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.467132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.467360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.467389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.467628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.467656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.468063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.468092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.468463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.468494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.468851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.468881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.469250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.469280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.469489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.469517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 
00:29:55.716 [2024-11-26 20:07:56.469836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.469864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.470244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.470275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.470657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.470685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 20:07:56.471066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 20:07:56.471095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.471474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.471504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.471860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.471888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.472263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.472294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.472658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.472687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.473043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.473071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.473324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.473354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 
00:29:55.717 [2024-11-26 20:07:56.473699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.473728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.474104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.474133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.474532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.474562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.474661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.474689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.475066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.475094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.475473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.475502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.475691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.475725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.476101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.476130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.476513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.476542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.476888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.476916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 
00:29:55.717 [2024-11-26 20:07:56.477138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.477181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.477439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.477468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.477709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.477736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.478118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.478146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.478508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.478537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.478909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.478938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.479209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.479239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.479462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.479491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.479866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.479895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.480262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.480293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 
00:29:55.717 [2024-11-26 20:07:56.480655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.480685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.481056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.481085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.481325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.481355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.481717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.481745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.482033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.482060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.482422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.482453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.482820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.482848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.483215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.483245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.483610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.483639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 20:07:56.484015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 20:07:56.484043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 
00:29:55.996 [2024-11-26 20:07:56.547316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.996 [2024-11-26 20:07:56.547351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:55.996 qpair failed and we were unable to recover it.
00:29:55.996 [2024-11-26 20:07:56.547453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.996 [2024-11-26 20:07:56.547481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:55.996 qpair failed and we were unable to recover it.
00:29:55.996 [2024-11-26 20:07:56.547724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfce10 is same with the state(6) to be set
00:29:55.996 Read completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Read completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Read completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Read completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Read completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Read completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Read completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Read completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Read completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Read completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Write completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Read completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Read completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Read completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Read completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Write completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Read completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Write completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Write completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Write completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Read completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Write completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Write completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Read completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Write completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Read completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Write completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Read completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Read completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Read completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Write completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 Read completed with error (sct=0, sc=8)
00:29:55.996 starting I/O failed
00:29:55.996 [2024-11-26 20:07:56.548762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:55.996 [2024-11-26 20:07:56.549245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.996 [2024-11-26 20:07:56.549332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:55.996 qpair failed and we were unable to recover it.
00:29:55.996 [2024-11-26 20:07:56.549706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.996 [2024-11-26 20:07:56.549738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:55.996 qpair failed and we were unable to recover it.
00:29:55.997 [2024-11-26 20:07:56.549955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.997 [2024-11-26 20:07:56.549984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:55.997 qpair failed and we were unable to recover it.
00:29:55.997 [2024-11-26 20:07:56.550408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.997 [2024-11-26 20:07:56.550511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:55.997 qpair failed and we were unable to recover it.
00:29:55.997 [2024-11-26 20:07:56.550966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.997 [2024-11-26 20:07:56.551017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:55.997 qpair failed and we were unable to recover it.
00:29:55.997 [2024-11-26 20:07:56.551438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.997 [2024-11-26 20:07:56.551471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:55.997 qpair failed and we were unable to recover it.
00:29:55.997 [2024-11-26 20:07:56.551816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.997 [2024-11-26 20:07:56.551849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:55.997 qpair failed and we were unable to recover it.
00:29:55.997 [2024-11-26 20:07:56.552200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.997 [2024-11-26 20:07:56.552231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:55.997 qpair failed and we were unable to recover it.
00:29:55.997 [2024-11-26 20:07:56.552579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.997 [2024-11-26 20:07:56.552610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:55.997 qpair failed and we were unable to recover it.
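The run of completed-with-error lines above is one qpair being drained: all 32 outstanding reads and writes are force-completed with an abort status, after which the next poll of the completion queue reports the dead qpair as CQ transport error -6, which is -ENXIO, the "No such device or address" in the message. A sketch of how a caller sees this; poll_io_qpair() is our name, the qpair is assumed to have come from spdk_nvme_ctrlr_alloc_io_qpair(), and only spdk_nvme_qpair_process_completions() is SPDK's actual API:

/* Sketch, assuming an already-connected SPDK I/O qpair: the poll loop that
 * surfaces the "CQ transport error -6" seen above. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include "spdk/nvme.h"

/* Returns 0 while the qpair is healthy; a negative errno (-ENXIO here) once
 * the transport has declared the qpair failed. */
static int poll_io_qpair(struct spdk_nvme_qpair *qpair)
{
    int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);
    if (rc < 0) {
        /* Matches the log: outstanding commands complete with an abort
         * status first, then the poll call itself reports the dead qpair. */
        fprintf(stderr, "CQ transport error %d (%s)\n", (int)rc, strerror(-rc));
        return (int)rc;
    }
    return 0;
}

On a negative return the application is expected to stop using the qpair and attempt recovery, which is exactly what the reconnect attempts that follow in the log are doing.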
00:29:55.997 [2024-11-26 20:07:56.552971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-11-26 20:07:56.553000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-11-26 20:07:56.553361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-11-26 20:07:56.553392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-11-26 20:07:56.553790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-11-26 20:07:56.553818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-11-26 20:07:56.554182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-11-26 20:07:56.554215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-11-26 20:07:56.554581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-11-26 20:07:56.554610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-11-26 20:07:56.554991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-11-26 20:07:56.555020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-11-26 20:07:56.555276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-11-26 20:07:56.555306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-11-26 20:07:56.555697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-11-26 20:07:56.555726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-11-26 20:07:56.556107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-11-26 20:07:56.556137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-11-26 20:07:56.556526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-11-26 20:07:56.556557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 
00:29:55.997 [2024-11-26 20:07:56.556916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-11-26 20:07:56.556945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-11-26 20:07:56.557280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-11-26 20:07:56.557310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-11-26 20:07:56.557551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-11-26 20:07:56.557580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-11-26 20:07:56.557925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-11-26 20:07:56.557954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-11-26 20:07:56.558297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-11-26 20:07:56.558329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-11-26 20:07:56.558704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-11-26 20:07:56.558734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-11-26 20:07:56.558829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-11-26 20:07:56.558857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 
00:29:55.997 Read completed with error (sct=0, sc=8)
00:29:55.997 starting I/O failed
00:29:55.997 Write completed with error (sct=0, sc=8)
00:29:55.997 starting I/O failed
00:29:55.997 Read completed with error (sct=0, sc=8)
00:29:55.997 starting I/O failed
00:29:55.997 Write completed with error (sct=0, sc=8)
00:29:55.997 starting I/O failed
00:29:55.997 Write completed with error (sct=0, sc=8)
00:29:55.997 starting I/O failed
00:29:55.997 Read completed with error (sct=0, sc=8)
00:29:55.997 starting I/O failed
00:29:55.997 Read completed with error (sct=0, sc=8)
00:29:55.997 starting I/O failed
00:29:55.997 Write completed with error (sct=0, sc=8)
00:29:55.997 starting I/O failed
00:29:55.997 Read completed with error (sct=0, sc=8)
00:29:55.997 starting I/O failed
00:29:55.997 Read completed with error (sct=0, sc=8)
00:29:55.997 starting I/O failed
00:29:55.997 Read completed with error (sct=0, sc=8)
00:29:55.997 starting I/O failed
00:29:55.997 Read completed with error (sct=0, sc=8)
00:29:55.997 starting I/O failed
00:29:55.997 Write completed with error (sct=0, sc=8)
00:29:55.997 starting I/O failed
00:29:55.997 Write completed with error (sct=0, sc=8)
00:29:55.997 starting I/O failed
00:29:55.997 Write completed with error (sct=0, sc=8)
00:29:55.997 starting I/O failed
00:29:55.997 Read completed with error (sct=0, sc=8)
00:29:55.997 starting I/O failed
00:29:55.997 Write completed with error (sct=0, sc=8)
00:29:55.997 starting I/O failed
00:29:55.997 Read completed with error (sct=0, sc=8)
00:29:55.997 starting I/O failed
00:29:55.997 Write completed with error (sct=0, sc=8)
00:29:55.997 starting I/O failed
00:29:55.997 Read completed with error (sct=0, sc=8)
00:29:55.997 starting I/O failed
00:29:55.997 Read completed with error (sct=0, sc=8)
00:29:55.997 starting I/O failed
00:29:55.997 Read completed with error (sct=0, sc=8)
00:29:55.997 starting I/O failed
00:29:55.997 Read completed with error (sct=0, sc=8)
00:29:55.997 starting I/O failed
00:29:55.997 Read completed with error (sct=0, sc=8)
00:29:55.997 starting I/O failed
00:29:55.997 Read completed with error (sct=0, sc=8)
00:29:55.998 starting I/O failed
00:29:55.998 Read completed with error (sct=0, sc=8)
00:29:55.998 starting I/O failed
00:29:55.998 Write completed with error (sct=0, sc=8)
00:29:55.998 starting I/O failed
00:29:55.998 Read completed with error (sct=0, sc=8)
00:29:55.998 starting I/O failed
00:29:55.998 Read completed with error (sct=0, sc=8)
00:29:55.998 starting I/O failed
00:29:55.998 Read completed with error (sct=0, sc=8)
00:29:55.998 starting I/O failed
00:29:55.998 Read completed with error (sct=0, sc=8)
00:29:55.998 starting I/O failed
00:29:55.998 Write completed with error (sct=0, sc=8)
00:29:55.998 starting I/O failed
00:29:55.998 [2024-11-26 20:07:56.559679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:55.998 [2024-11-26 20:07:56.559906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.998 [2024-11-26 20:07:56.559966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420
00:29:55.998 qpair failed and we were unable to recover it.
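In each (sct=0, sc=8) pair, sct is the NVMe status code type (0 means generic command status) and sc=0x8 is the generic status "Command Aborted due to SQ Deletion": the target never rejected these I/Os, they were aborted on the host side when the failed qpair's submission queue was torn down. A hedged sketch of a completion callback that decodes the same pair with SPDK's public helpers (io_complete is our name; the struct fields and spdk_nvme_cpl_get_status_string() are from SPDK's nvme.h):

/* Sketch of an I/O completion callback decoding the (sct, sc) pairs above. */
#include <stdio.h>
#include "spdk/nvme.h"

static void io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
    (void)arg;
    if (spdk_nvme_cpl_is_error(cpl)) {
        /* For sct=0, sc=0x8 the helper names the abort status: the request
         * died with its submission queue, not at the target. */
        fprintf(stderr, "I/O completed with error (sct=%d, sc=%d): %s\n",
                cpl->status.sct, cpl->status.sc,
                spdk_nvme_cpl_get_status_string(&cpl->status));
    }
}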
00:29:55.998 [2024-11-26 20:07:56.560447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.560553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.561011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.561048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.561608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.561712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.562012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.562050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.562450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.562484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.562708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.562737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.562982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.563012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.563285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.563316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.563723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.563752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.563999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.564029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 
00:29:55.998 [2024-11-26 20:07:56.564312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.564343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.564766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.564796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.565143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.565186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.565507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.565536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.565919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.565948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.566195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.566225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.566506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.566535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.566931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.566962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.567178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.567209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.567465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.567493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 
00:29:55.998 [2024-11-26 20:07:56.567718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.567747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.568100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.568131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.568471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.568502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.568842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.568871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.569243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.569275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.569481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.569510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.569862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.569892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.570027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.570062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.570175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.570207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.570541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.570572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 
00:29:55.998 [2024-11-26 20:07:56.570800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.570829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.571195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.571225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.571467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.571496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-11-26 20:07:56.571865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-11-26 20:07:56.571894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.572245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.572276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.572488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.572517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.572753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.572783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.573171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.573208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.573578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.573608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.573841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.573869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 
00:29:55.999 [2024-11-26 20:07:56.574168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.574199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.574436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.574469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.574834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.574864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.575126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.575167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.575513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.575543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.575917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.575947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.576312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.576342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.576724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.576754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.577105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.577135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.577366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.577398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 
00:29:55.999 [2024-11-26 20:07:56.577767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.577800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.578069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.578099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.578496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.578531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.578781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.578813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.579188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.579222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.579479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.579508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.579763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.579796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.580034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.580064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.580373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.580403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.580778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.580811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 
00:29:55.999 [2024-11-26 20:07:56.581178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.581211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.581590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.581620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.582001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.582031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.582403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.582434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.582793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.582822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.583045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.583074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.583446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.583477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.583840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.583869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.583964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.583993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8920000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.584423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.584518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 
00:29:55.999 [2024-11-26 20:07:56.584886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.584920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-11-26 20:07:56.585134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-11-26 20:07:56.585177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:56.000 [2024-11-26 20:07:56.585410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-11-26 20:07:56.585440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-11-26 20:07:56.585685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-11-26 20:07:56.585714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-11-26 20:07:56.586074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-11-26 20:07:56.586103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-11-26 20:07:56.586343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-11-26 20:07:56.586374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-11-26 20:07:56.586765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-11-26 20:07:56.586794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-11-26 20:07:56.587173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-11-26 20:07:56.587217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-11-26 20:07:56.587561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-11-26 20:07:56.587591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-11-26 20:07:56.587961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-11-26 20:07:56.587990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 
00:29:56.000 [2024-11-26 20:07:56.588356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-11-26 20:07:56.588388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-11-26 20:07:56.588736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-11-26 20:07:56.588764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-11-26 20:07:56.588987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-11-26 20:07:56.589016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-11-26 20:07:56.589388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-11-26 20:07:56.589419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-11-26 20:07:56.589644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-11-26 20:07:56.589672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-11-26 20:07:56.589869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-11-26 20:07:56.589897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-11-26 20:07:56.590254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-11-26 20:07:56.590283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-11-26 20:07:56.590624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-11-26 20:07:56.590654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-11-26 20:07:56.591019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-11-26 20:07:56.591048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-11-26 20:07:56.591395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-11-26 20:07:56.591428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 
00:29:56.000 [2024-11-26 20:07:56.591794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-11-26 20:07:56.591822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-11-26 20:07:56.591931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-11-26 20:07:56.591959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-11-26 20:07:56.592228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-11-26 20:07:56.592259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-11-26 20:07:56.592614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-11-26 20:07:56.592644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-11-26 20:07:56.593003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-11-26 20:07:56.593034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-11-26 20:07:56.593388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-11-26 20:07:56.593419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-11-26 20:07:56.593785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-11-26 20:07:56.593814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-11-26 20:07:56.594176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-11-26 20:07:56.594206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-11-26 20:07:56.594562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-11-26 20:07:56.594591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-11-26 20:07:56.594968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-11-26 20:07:56.594997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 
00:29:56.000 [2024-11-26 20:07:56.595354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.000 [2024-11-26 20:07:56.595385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.000 qpair failed and we were unable to recover it.
[... the same three-line failure repeats continuously, with new timestamps, from 20:07:56.595 through 20:07:56.671: every TCP connect() attempt by tqpair=0xd070c0 to 10.0.0.2 port 4420 fails with errno = 111 and the qpair cannot be recovered ...]
00:29:56.006 [2024-11-26 20:07:56.671401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.006 [2024-11-26 20:07:56.671433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.006 qpair failed and we were unable to recover it.
00:29:56.006 [2024-11-26 20:07:56.671777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 20:07:56.671807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 20:07:56.672176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 20:07:56.672206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 20:07:56.672428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 20:07:56.672459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 20:07:56.672804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 20:07:56.672833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 20:07:56.673218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 20:07:56.673250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 20:07:56.673605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 20:07:56.673636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 20:07:56.673981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 20:07:56.674012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 20:07:56.674237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 20:07:56.674269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 20:07:56.674647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 20:07:56.674676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 20:07:56.674928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 20:07:56.674956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 
00:29:56.006 [2024-11-26 20:07:56.675309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 20:07:56.675339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 20:07:56.675605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 20:07:56.675634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 20:07:56.675924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 20:07:56.675959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 20:07:56.676312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 20:07:56.676345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 20:07:56.676727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 20:07:56.676756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 20:07:56.676972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 20:07:56.677001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 20:07:56.677222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 20:07:56.677253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 20:07:56.677593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 20:07:56.677623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 20:07:56.677848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 20:07:56.677876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 20:07:56.678230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 20:07:56.678260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 
00:29:56.006 [2024-11-26 20:07:56.678621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 20:07:56.678651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 20:07:56.678995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 20:07:56.679024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 20:07:56.679266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 20:07:56.679297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.679674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.679706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.680100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.680129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.680496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.680527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.680778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.680810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.681173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.681205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.681436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.681464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.681847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.681877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 
00:29:56.007 [2024-11-26 20:07:56.682240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.682272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.682633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.682662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.682924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.682953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.683311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.683343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.683709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.683738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.683976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.684007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.684348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.684381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.684746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.684774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.685153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.685192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.685599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.685629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 
00:29:56.007 [2024-11-26 20:07:56.685860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.685891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.686263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.686295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.686666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.686695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.686921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.686950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.687331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.687364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.687606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.687635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.688000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.688030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.688382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.688413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.688809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.688842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.689190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.689220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 
00:29:56.007 [2024-11-26 20:07:56.689596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.689627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.689995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.690027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.690421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.690452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.690820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.690859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.691197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.691227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.691573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.691605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.691970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.691999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.692257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.692287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.692511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.692539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 20:07:56.692908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 20:07:56.692938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 
00:29:56.007 [2024-11-26 20:07:56.693248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.693277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.693513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.693542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.693886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.693918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.694137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.694198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.694411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.694444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.694815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.694847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.695206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.695239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.695627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.695657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.696032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.696063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.696427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.696457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 
00:29:56.008 [2024-11-26 20:07:56.696819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.696848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.697066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.697096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.697383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.697419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.697763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.697793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.698180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.698214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.698595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.698624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.698987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.699020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.699409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.699448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.699800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.699833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.700189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.700222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 
00:29:56.008 [2024-11-26 20:07:56.700462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.700500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.700872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.700901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.701280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.701310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.701711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.701742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.701964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.701992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.702246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.702276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.702607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.702637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.703015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.703044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.703271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.703300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.703700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.703729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 
00:29:56.008 [2024-11-26 20:07:56.704094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.704123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.704531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.704561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.704930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.704960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.705333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.705363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.705733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.705762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.705977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.706005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.706381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.706411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.706774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.706804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 20:07:56.707180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 20:07:56.707211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 20:07:56.707441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.707470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 
00:29:56.009 [2024-11-26 20:07:56.707867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.707895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 20:07:56.708195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.708225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 20:07:56.708552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.708581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 20:07:56.708948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.708976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 20:07:56.709184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.709219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 20:07:56.709618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.709648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 20:07:56.710002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.710031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 20:07:56.710269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.710301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 20:07:56.710736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.710766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 20:07:56.711010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.711038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 
00:29:56.009 [2024-11-26 20:07:56.711245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.711276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 20:07:56.711648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.711676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 20:07:56.712065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.712094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 20:07:56.712325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.712357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 20:07:56.712699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.712730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 20:07:56.712824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.712854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 20:07:56.713213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.713265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 20:07:56.713617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.713648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 20:07:56.713983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.714012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 20:07:56.714378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.714409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 
00:29:56.009 [2024-11-26 20:07:56.714639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.714668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 20:07:56.714960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.714995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 20:07:56.715140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.715179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 20:07:56.715435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.715463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 20:07:56.715791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.715820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 20:07:56.716183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.716213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 20:07:56.716427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.716455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 20:07:56.716798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.716827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 20:07:56.717171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.717202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 20:07:56.717535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 20:07:56.717564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 
00:29:56.009 [2024-11-26 20:07:56.717897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.009 [2024-11-26 20:07:56.717926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.009 qpair failed and we were unable to recover it.
00:29:56.009 [2024-11-26 20:07:56.718139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.009 [2024-11-26 20:07:56.718176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.009 qpair failed and we were unable to recover it.
00:29:56.009 [2024-11-26 20:07:56.718558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.009 [2024-11-26 20:07:56.718589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.009 qpair failed and we were unable to recover it.
00:29:56.009 [2024-11-26 20:07:56.718949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.009 [2024-11-26 20:07:56.718978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.009 qpair failed and we were unable to recover it.
00:29:56.009 [2024-11-26 20:07:56.719347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.009 [2024-11-26 20:07:56.719377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.009 qpair failed and we were unable to recover it.
00:29:56.009 [2024-11-26 20:07:56.719744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.009 [2024-11-26 20:07:56.719772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.009 qpair failed and we were unable to recover it.
00:29:56.009 [2024-11-26 20:07:56.720137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.009 [2024-11-26 20:07:56.720177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.009 qpair failed and we were unable to recover it.
00:29:56.009 [2024-11-26 20:07:56.720414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.720442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.720673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.720701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.721043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.721071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.721429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.721460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.721689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.721718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.722066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.722096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.722485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.722515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.722888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.722917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.723146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.723184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.723455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.723484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.723732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.723761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.724140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.724185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.724532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.724561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.724924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.724953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.725300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.725333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.725709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.725738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.725947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.725976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.726334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.726365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.726745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.726774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.727147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.727199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.727405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.727434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.727665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.727693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.728083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.728114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.728343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.728374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.728728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.728757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.729120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.729150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.729495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.729524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.729891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.729922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.730292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.730322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.730640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.730677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.730880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.730908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.731226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.731257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.731505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.731533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.731930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.731959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.732328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.732359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.732743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.732771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.733058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.733090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.733435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.733465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.010 [2024-11-26 20:07:56.733683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.010 [2024-11-26 20:07:56.733711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.010 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.733924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.733953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.734313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.734343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.734720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.734748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.735118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.735146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.735405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.735434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.735651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.735680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.735892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.735921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.736267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.736298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.736675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.736704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.736974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.737002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.737243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.737276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.737627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.737655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.738025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.738054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.738423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.738461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.738858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.738888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.739253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.739283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.739625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.739654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.739871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.739899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.740277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.740306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.740626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.740654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.740904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.740932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.741281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.741311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.741676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.741705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.742063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.742091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.742470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.742500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.742757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.742786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.743140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.743187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.743575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.743604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.743968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.743998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.744386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.744416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.744776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.744804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.745175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.745204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.745573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.011 [2024-11-26 20:07:56.745601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.011 qpair failed and we were unable to recover it.
00:29:56.011 [2024-11-26 20:07:56.745972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.746000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.746361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.746390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.746750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.746778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.747138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.747188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.747549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.747578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.747790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.747818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.748209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.748238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.748551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.748589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.748946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.748975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.749341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.749371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.749609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.749638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.750024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.750052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.750288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.750318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.750694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.750723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.751085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.751114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.751360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.751389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.751739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.751768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.752145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.752181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.752427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.752459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.752830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.752860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.753195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.753231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.753636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.753665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.754050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.754080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.754270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.754301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.754679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.754710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.755064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.755094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.755468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.755498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.755829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.755859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.756213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.756244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.756633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.756662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.757029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.757057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.757430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.757460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.757809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.757838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.758197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.758227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.758587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.758616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.758979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.759009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.759375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.759405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.759763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.759792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.760149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.012 [2024-11-26 20:07:56.760187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 20:07:56.760532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.760562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.760910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.760939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.761288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.761319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.761544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.761573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.761943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.761973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.762246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.762276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.762635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.762665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.763040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.763068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.763420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.763451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.763810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.763845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.764106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.764135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.764492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.764521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.764860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.764889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.765266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.765296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.765529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.765560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.765911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.765940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.766290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.766323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.766544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.766572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.766951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.766980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.767318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.767349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.767710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.767739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.768114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.768148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.768540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.768569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.768921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.768950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.769122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.769151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.769579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.769609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.769974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.770003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.770356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.770386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.770614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.770642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.770857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.770885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.771274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.771304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.771526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.771556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.771910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.771940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.772309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.772339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.772708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.772737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.772955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.772984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.773361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.773390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.773737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.773767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 20:07:56.773979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 20:07:56.774008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.774376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.774406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.774621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.774651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.775051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.775080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.775417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.775449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.775663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.775691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.776063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.776093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.776190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.776220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.776706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.776809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.777144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.777199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.777571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.777604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.777972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.778002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.778529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.778632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.779052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.779090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.779459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.779491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.779851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.779881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.780241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.780273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.780620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.780650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.781014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.781044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.781427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.781457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.781846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.781874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.782239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.782272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.782622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.782651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.783034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.783063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.783408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.783437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.783804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.783845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.784184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.784215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.784598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.784627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.784976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.785004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.785388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.785419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.785678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.785706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.786063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.786091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.786479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.786509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.786893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.786922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.787291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.787320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.787557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.787590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.787972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.788002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.788389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.788420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.014 qpair failed and we were unable to recover it.
00:29:56.014 [2024-11-26 20:07:56.788768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.014 [2024-11-26 20:07:56.788798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.015 qpair failed and we were unable to recover it.
00:29:56.015 [2024-11-26 20:07:56.789169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.015 [2024-11-26 20:07:56.789201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.015 qpair failed and we were unable to recover it.
00:29:56.015 [2024-11-26 20:07:56.789328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.015 [2024-11-26 20:07:56.789361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.015 qpair failed and we were unable to recover it.
00:29:56.015 [2024-11-26 20:07:56.789723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.015 [2024-11-26 20:07:56.789753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.015 qpair failed and we were unable to recover it.
00:29:56.015 [2024-11-26 20:07:56.790108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.015 [2024-11-26 20:07:56.790137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.015 qpair failed and we were unable to recover it.
00:29:56.015 [2024-11-26 20:07:56.790535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.015 [2024-11-26 20:07:56.790565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.015 qpair failed and we were unable to recover it.
00:29:56.015 [2024-11-26 20:07:56.790906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.015 [2024-11-26 20:07:56.790935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.015 qpair failed and we were unable to recover it.
00:29:56.015 [2024-11-26 20:07:56.791297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.015 [2024-11-26 20:07:56.791329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.015 qpair failed and we were unable to recover it.
00:29:56.015 [2024-11-26 20:07:56.791695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.015 [2024-11-26 20:07:56.791724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.015 qpair failed and we were unable to recover it.
00:29:56.015 [2024-11-26 20:07:56.792081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.015 [2024-11-26 20:07:56.792110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.015 qpair failed and we were unable to recover it.
00:29:56.015 [2024-11-26 20:07:56.792477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.015 [2024-11-26 20:07:56.792507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.015 qpair failed and we were unable to recover it.
00:29:56.015 [2024-11-26 20:07:56.792874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.015 [2024-11-26 20:07:56.792904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.015 qpair failed and we were unable to recover it.
00:29:56.015 [2024-11-26 20:07:56.793242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.015 [2024-11-26 20:07:56.793273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.015 qpair failed and we were unable to recover it.
00:29:56.015 [2024-11-26 20:07:56.793643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.015 [2024-11-26 20:07:56.793672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.015 qpair failed and we were unable to recover it.
00:29:56.015 [2024-11-26 20:07:56.794032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.015 [2024-11-26 20:07:56.794063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.015 qpair failed and we were unable to recover it.
00:29:56.015 [2024-11-26 20:07:56.794526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.015 [2024-11-26 20:07:56.794556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.015 qpair failed and we were unable to recover it.
00:29:56.015 [2024-11-26 20:07:56.794912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-11-26 20:07:56.794940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-11-26 20:07:56.795190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-11-26 20:07:56.795220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-11-26 20:07:56.795579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-11-26 20:07:56.795608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-11-26 20:07:56.795981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-11-26 20:07:56.796010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-11-26 20:07:56.796298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-11-26 20:07:56.796327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-11-26 20:07:56.796696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-11-26 20:07:56.796725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-11-26 20:07:56.797085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-11-26 20:07:56.797115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-11-26 20:07:56.797218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-11-26 20:07:56.797246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.797653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.797685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.798028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.798058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 
00:29:56.289 [2024-11-26 20:07:56.798414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.798443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.798803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.798839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.799200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.799232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.799608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.799637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.799857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.799886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.800250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.800282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.800665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.800694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.801049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.801079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.801418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.801447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.801813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.801842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 
00:29:56.289 [2024-11-26 20:07:56.802113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.802142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.802367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.802397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.802768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.802796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.803202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.803232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.803480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.803509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.803889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.803918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.804298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.804328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.804669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.804698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.805062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.805090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.805473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.805504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 
00:29:56.289 [2024-11-26 20:07:56.805747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.805775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.806143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.806180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.806534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.806564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.806803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.806833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.807200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.807232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.807474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.807503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.807778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.807808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.808164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.808194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.808516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.808547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.808777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.808806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 
00:29:56.289 [2024-11-26 20:07:56.809077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.809108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.289 [2024-11-26 20:07:56.809300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.289 [2024-11-26 20:07:56.809330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.289 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.809718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.809747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.810114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.810142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.810502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.810532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.810801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.810829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.811192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.811223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.811584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.811613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.811982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.812011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.812389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.812428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 
00:29:56.290 [2024-11-26 20:07:56.812770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.812800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.813018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.813052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.813374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.813404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.813796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.813826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.814038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.814067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.814417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.814446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.814815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.814845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.815213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.815243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.815610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.815639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.816012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.816040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 
00:29:56.290 [2024-11-26 20:07:56.816394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.816424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.816777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.816806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.817184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.817234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.817616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.817647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.818026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.818055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.818402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.818434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.818769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.818798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.819194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.819224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.819567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.819598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.819972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.820000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 
00:29:56.290 [2024-11-26 20:07:56.820252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.820284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.820670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.820699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.821068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.821097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.821519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.821549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.821904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.821935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.822312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.822343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.822720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.822749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.823139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.823179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.823569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.823599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.290 qpair failed and we were unable to recover it. 00:29:56.290 [2024-11-26 20:07:56.823951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.290 [2024-11-26 20:07:56.823980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 
00:29:56.291 [2024-11-26 20:07:56.824382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.824413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-11-26 20:07:56.824764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.824794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-11-26 20:07:56.825171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.825202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-11-26 20:07:56.825546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.825577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-11-26 20:07:56.825933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.825962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-11-26 20:07:56.826306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.826337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-11-26 20:07:56.826580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.826612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-11-26 20:07:56.826837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.826866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-11-26 20:07:56.827074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.827102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-11-26 20:07:56.827331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.827361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 
00:29:56.291 [2024-11-26 20:07:56.827737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.827765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-11-26 20:07:56.827989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.828023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-11-26 20:07:56.828278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.828308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-11-26 20:07:56.828678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.828708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-11-26 20:07:56.828959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.828987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-11-26 20:07:56.829375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.829406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-11-26 20:07:56.829785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.829813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-11-26 20:07:56.830036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.830064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-11-26 20:07:56.830412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.830442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-11-26 20:07:56.830789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.830819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 
00:29:56.291 [2024-11-26 20:07:56.831153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.831192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-11-26 20:07:56.831490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.831518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-11-26 20:07:56.831887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.831916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-11-26 20:07:56.832201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.832231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-11-26 20:07:56.832620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.832648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-11-26 20:07:56.833006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.833034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-11-26 20:07:56.833408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.833440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-11-26 20:07:56.833800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.833829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-11-26 20:07:56.834253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.834282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-11-26 20:07:56.834381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-11-26 20:07:56.834409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 
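errno = 111 is ECONNREFUSED on Linux: the target at 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port) is reachable at the IP level but has no listener, so each TCP connect() is answered with an RST and the initiator retries, logging one failure triplet per attempt. A minimal standalone sketch of that failure mode, using plain POSIX sockets rather than SPDK's posix_sock_create(), looks like this:

/* Sketch only: reproduces the errno = 111 failure mode with plain
 * POSIX sockets; this is not SPDK code. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }

    /* 10.0.0.2:4420 mirrors the target in the log; any reachable host
     * with nothing listening on the port behaves the same way. */
    addr.sin_port = htons(4420);
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* The peer sends RST because no socket is listening, so
         * connect() fails with ECONNREFUSED (111 on Linux); the
         * NVMe/TCP initiator keeps retrying exactly this call. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Run against a reachable host with nothing bound to the port, this prints "connect() failed, errno = 111 (Connection refused)", matching the entries above.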
00:29:56.291 Read completed with error (sct=0, sc=8)
00:29:56.291 starting I/O failed
[... 32 outstanding I/Os in total (18 reads, 14 writes) complete with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:29:56.292 [2024-11-26 20:07:56.835217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:56.292 [2024-11-26 20:07:56.835768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.292 [2024-11-26 20:07:56.835874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.292 qpair failed and we were unable to recover it.
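Once the TCP connection behind qpair id 4 drops, spdk_nvme_qpair_process_completions() reports transport error -6, which is -ENXIO ("No such device or address", as the log itself expands), and every outstanding command on that queue is completed with sct=0, sc=8. Per the NVMe base specification, status code type 0 is Generic Command Status, and status code 0x08 in that type is "Command Aborted due to SQ Deletion": the I/Os were aborted because their submission queue went away, not because the media failed. A small illustrative decoder (a sketch using spec values; decode_generic_sc is a hypothetical helper, not SPDK's status tables):

/* Sketch: decode the completion status seen above using values from
 * the NVMe base spec. Not SPDK code. */
#include <stdio.h>

static const char *decode_generic_sc(unsigned int sc)
{
    switch (sc) {
    case 0x00: return "Successful Completion";
    case 0x04: return "Data Transfer Error";
    case 0x07: return "Command Abort Requested";
    case 0x08: return "Command Aborted due to SQ Deletion";
    default:   return "other generic status";
    }
}

int main(void)
{
    unsigned int sct = 0, sc = 8; /* as reported by each failed I/O above */

    if (sct == 0) /* Status Code Type 0 = Generic Command Status */
        printf("sct=%u, sc=%#x -> %s\n", sct, sc, decode_generic_sc(sc));

    /* The accompanying "CQ transport error -6" is -ENXIO: the TCP
     * connection backing the qpair is gone, so all queued reads and
     * writes are drained and completed as aborted. */
    return 0;
}

This prints "sct=0, sc=0x8 -> Command Aborted due to SQ Deletion", which is why the 32 reads and writes above all fail with the same status the instant the queue pair is torn down.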
00:29:56.292 [2024-11-26 20:07:56.836242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.292 [2024-11-26 20:07:56.836297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd070c0 with addr=10.0.0.2, port=4420
00:29:56.292 qpair failed and we were unable to recover it.
[... the tqpair=0xd070c0 failure repeats 3 more times through 20:07:56.837559, after which the tqpair=0x7f8918000b90 connect()/qpair-failure triplet resumes and repeats 46 times between 20:07:56.837951 and 20:07:56.854108, always errno = 111 against addr=10.0.0.2, port=4420; only its last occurrence is shown ...]
00:29:56.293 [2024-11-26 20:07:56.854080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.293 [2024-11-26 20:07:56.854108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.293 qpair failed and we were unable to recover it.
00:29:56.293 [2024-11-26 20:07:56.854462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.293 [2024-11-26 20:07:56.854492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.293 qpair failed and we were unable to recover it. 00:29:56.293 [2024-11-26 20:07:56.854718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.293 [2024-11-26 20:07:56.854747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.293 qpair failed and we were unable to recover it. 00:29:56.293 [2024-11-26 20:07:56.854963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.293 [2024-11-26 20:07:56.854991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.293 qpair failed and we were unable to recover it. 00:29:56.293 [2024-11-26 20:07:56.855349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.293 [2024-11-26 20:07:56.855379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.293 qpair failed and we were unable to recover it. 00:29:56.293 [2024-11-26 20:07:56.855739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.293 [2024-11-26 20:07:56.855768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.293 qpair failed and we were unable to recover it. 00:29:56.293 [2024-11-26 20:07:56.856129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.293 [2024-11-26 20:07:56.856165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.293 qpair failed and we were unable to recover it. 00:29:56.293 [2024-11-26 20:07:56.856539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.293 [2024-11-26 20:07:56.856569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.293 qpair failed and we were unable to recover it. 00:29:56.293 [2024-11-26 20:07:56.856917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.293 [2024-11-26 20:07:56.856946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.293 qpair failed and we were unable to recover it. 00:29:56.293 [2024-11-26 20:07:56.857187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.293 [2024-11-26 20:07:56.857218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.293 qpair failed and we were unable to recover it. 00:29:56.293 [2024-11-26 20:07:56.857580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.293 [2024-11-26 20:07:56.857609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.293 qpair failed and we were unable to recover it. 
00:29:56.293 [2024-11-26 20:07:56.857876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.293 [2024-11-26 20:07:56.857904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.293 qpair failed and we were unable to recover it. 00:29:56.293 [2024-11-26 20:07:56.858264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.293 [2024-11-26 20:07:56.858294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.293 qpair failed and we were unable to recover it. 00:29:56.293 [2024-11-26 20:07:56.858553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.293 [2024-11-26 20:07:56.858580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.293 qpair failed and we were unable to recover it. 00:29:56.293 [2024-11-26 20:07:56.858760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.293 [2024-11-26 20:07:56.858789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.293 qpair failed and we were unable to recover it. 00:29:56.293 [2024-11-26 20:07:56.859148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.293 [2024-11-26 20:07:56.859189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.293 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.859613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.859641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.860000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.860028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.860255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.860286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.860653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.860681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.861051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.861079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 
00:29:56.294 [2024-11-26 20:07:56.861440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.861469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.861699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.861727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.861964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.861992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.862361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.862390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.862762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.862791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.863244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.863274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.863612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.863640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.864014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.864043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.864407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.864436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.864829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.864858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 
00:29:56.294 [2024-11-26 20:07:56.865076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.865112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.865481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.865512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.865873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.865901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.866272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.866303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.866675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.866703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.867078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.867107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.867494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.867523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.867887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.867916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.868281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.868311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.868703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.868731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 
00:29:56.294 [2024-11-26 20:07:56.868998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.869026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.869260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.869289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.869517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.869545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.869924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.869953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.870317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.870348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.870710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.870739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.871098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.871126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.871518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.871547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.871787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.871815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.872198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.872228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 
00:29:56.294 [2024-11-26 20:07:56.872469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.872497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.872747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.872775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 20:07:56.873086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 20:07:56.873115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.873421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.873451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.873825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.873854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.874215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.874246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.874466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.874494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.874741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.874771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.875152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.875190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.875497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.875526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 
00:29:56.295 [2024-11-26 20:07:56.875913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.875941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.876302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.876331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.876687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.876716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.877080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.877108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.877493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.877522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.877762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.877791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.878157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.878209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.878556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.878584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.878923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.878953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.879178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.879208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 
00:29:56.295 [2024-11-26 20:07:56.879423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.879456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.879698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.879727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.880052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.880082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.880448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.880477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.880710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.880738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.881128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.881157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.881544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.881573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.881941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.881969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.882201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.882232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.882627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.882656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 
00:29:56.295 [2024-11-26 20:07:56.883008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.883037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.883387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.883416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.883636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.883664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.884024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.884052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.884430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.884460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.884789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.884819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.885207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.885238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.885627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.885657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.886032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.886062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 20:07:56.886405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.886437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 
00:29:56.295 [2024-11-26 20:07:56.886782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 20:07:56.886812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.887177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.887208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.887560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.887590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.887913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.887944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.888276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.888308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.888691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.888719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.889092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.889122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.889489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.889521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.889766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.889794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.890087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.890116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 
00:29:56.296 [2024-11-26 20:07:56.890501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.890533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.890898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.890929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.891135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.891174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.891530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.891558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.891937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.891966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.892330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.892361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.892730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.892759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.892987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.893016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.893347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.893378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.893611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.893639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 
00:29:56.296 [2024-11-26 20:07:56.894009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.894044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.894305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.894336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.894720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.894749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.895111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.895142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.895543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.895573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.895953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.895984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.896363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.896393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.896743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.896774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.897177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.897208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.897557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.897587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 
00:29:56.296 [2024-11-26 20:07:56.897947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.897978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.898342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.898373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.898765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.898797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.899172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.899204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.899458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.899487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.899832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.899863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.900079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.900108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.900468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.900501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.296 [2024-11-26 20:07:56.900862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.296 [2024-11-26 20:07:56.900891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.296 qpair failed and we were unable to recover it. 00:29:56.297 [2024-11-26 20:07:56.901237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.297 [2024-11-26 20:07:56.901269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.297 qpair failed and we were unable to recover it. 
00:29:56.297 [2024-11-26 20:07:56.901608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.297 [2024-11-26 20:07:56.901637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.297 qpair failed and we were unable to recover it.
[... the same three-message error sequence repeats roughly 200 times between 20:07:56.901 and 20:07:56.978, differing only in timestamps: posix_sock_create reports connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f8918000b90 at addr=10.0.0.2, port=4420, and each qpair fails without recovery ...]
00:29:56.302 [2024-11-26 20:07:56.978678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 20:07:56.978707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it.
00:29:56.302 [2024-11-26 20:07:56.979034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 20:07:56.979062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 20:07:56.979444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 20:07:56.979474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 20:07:56.979845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 20:07:56.979875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 20:07:56.979989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 20:07:56.980020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 20:07:56.980244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 20:07:56.980274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 20:07:56.980646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.980677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 20:07:56.980918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.980947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 20:07:56.981197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.981228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 20:07:56.981439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.981469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 20:07:56.981841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.981871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 
00:29:56.303 [2024-11-26 20:07:56.982245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.982276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 20:07:56.982650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.982680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 20:07:56.982891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.982920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 20:07:56.983138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.983174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 20:07:56.983319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.983351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 20:07:56.983577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.983607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 20:07:56.983996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.984028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 20:07:56.984318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.984349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 20:07:56.984695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.984725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 20:07:56.985102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.985132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 
00:29:56.303 [2024-11-26 20:07:56.985545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.985575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 20:07:56.985798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.985826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 20:07:56.986189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.986219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 20:07:56.986569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.986599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 20:07:56.987003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.987032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 20:07:56.987241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.987271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 20:07:56.987639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.987669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 20:07:56.988018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.988047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 20:07:56.988272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.988301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 20:07:56.988537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.988567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 
00:29:56.303 [2024-11-26 20:07:56.988939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.988968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 20:07:56.989187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.989216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 20:07:56.989562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.989592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 20:07:56.989960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 20:07:56.989990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 20:07:56.990359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 20:07:56.990389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 20:07:56.990764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 20:07:56.990792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 20:07:56.991157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 20:07:56.991194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 20:07:56.991540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 20:07:56.991577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 20:07:56.991985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 20:07:56.992015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 20:07:56.992289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 20:07:56.992320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 
00:29:56.304 [2024-11-26 20:07:56.992680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 20:07:56.992711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 20:07:56.993080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 20:07:56.993110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 20:07:56.993490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 20:07:56.993520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 20:07:56.993868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 20:07:56.993900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 20:07:56.994120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 20:07:56.994150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 20:07:56.994399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 20:07:56.994432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 20:07:56.994812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 20:07:56.994841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 20:07:56.995182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 20:07:56.995214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 20:07:56.995479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 20:07:56.995508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 20:07:56.995743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 20:07:56.995771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 
00:29:56.304 [2024-11-26 20:07:56.996197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 20:07:56.996229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 20:07:56.996606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 20:07:56.996635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 20:07:56.996996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 20:07:56.997025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.305 [2024-11-26 20:07:56.997404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.305 [2024-11-26 20:07:56.997434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.305 qpair failed and we were unable to recover it. 00:29:56.305 [2024-11-26 20:07:56.997803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.305 [2024-11-26 20:07:56.997832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.305 qpair failed and we were unable to recover it. 00:29:56.305 [2024-11-26 20:07:56.998187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.305 [2024-11-26 20:07:56.998217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.305 qpair failed and we were unable to recover it. 00:29:56.305 [2024-11-26 20:07:56.998471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.305 [2024-11-26 20:07:56.998500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.305 qpair failed and we were unable to recover it. 00:29:56.305 [2024-11-26 20:07:56.998755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.305 [2024-11-26 20:07:56.998784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.305 qpair failed and we were unable to recover it. 00:29:56.305 [2024-11-26 20:07:56.999149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.305 [2024-11-26 20:07:56.999186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.305 qpair failed and we were unable to recover it. 00:29:56.305 [2024-11-26 20:07:56.999452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.305 [2024-11-26 20:07:56.999481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.305 qpair failed and we were unable to recover it. 
00:29:56.305 [2024-11-26 20:07:56.999702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.305 [2024-11-26 20:07:56.999731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.305 qpair failed and we were unable to recover it. 00:29:56.305 [2024-11-26 20:07:57.000130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.305 [2024-11-26 20:07:57.000182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.305 qpair failed and we were unable to recover it. 00:29:56.305 [2024-11-26 20:07:57.000555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.305 [2024-11-26 20:07:57.000585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.305 qpair failed and we were unable to recover it. 00:29:56.305 [2024-11-26 20:07:57.000947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.305 [2024-11-26 20:07:57.000977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.305 qpair failed and we were unable to recover it. 00:29:56.305 [2024-11-26 20:07:57.001377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.305 [2024-11-26 20:07:57.001409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.305 qpair failed and we were unable to recover it. 00:29:56.305 [2024-11-26 20:07:57.001661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.305 [2024-11-26 20:07:57.001690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.305 qpair failed and we were unable to recover it. 00:29:56.305 [2024-11-26 20:07:57.002042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.305 [2024-11-26 20:07:57.002071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.305 qpair failed and we were unable to recover it. 00:29:56.305 [2024-11-26 20:07:57.002426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.305 [2024-11-26 20:07:57.002456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.305 qpair failed and we were unable to recover it. 00:29:56.305 [2024-11-26 20:07:57.002819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.305 [2024-11-26 20:07:57.002849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.305 qpair failed and we were unable to recover it. 00:29:56.305 [2024-11-26 20:07:57.003222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.305 [2024-11-26 20:07:57.003252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.305 qpair failed and we were unable to recover it. 
00:29:56.305 [2024-11-26 20:07:57.003620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.305 [2024-11-26 20:07:57.003649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-11-26 20:07:57.004051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-11-26 20:07:57.004080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-11-26 20:07:57.004343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-11-26 20:07:57.004374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-11-26 20:07:57.004721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-11-26 20:07:57.004751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-11-26 20:07:57.005133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-11-26 20:07:57.005170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-11-26 20:07:57.005495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-11-26 20:07:57.005525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-11-26 20:07:57.005891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-11-26 20:07:57.005920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-11-26 20:07:57.006131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-11-26 20:07:57.006173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-11-26 20:07:57.006578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-11-26 20:07:57.006607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-11-26 20:07:57.006867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-11-26 20:07:57.006895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 
00:29:56.306 [2024-11-26 20:07:57.007232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-11-26 20:07:57.007263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-11-26 20:07:57.007528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-11-26 20:07:57.007560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-11-26 20:07:57.007936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-11-26 20:07:57.007967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-11-26 20:07:57.008224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-11-26 20:07:57.008255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-11-26 20:07:57.008586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-11-26 20:07:57.008615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-11-26 20:07:57.008989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-11-26 20:07:57.009017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-11-26 20:07:57.009360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-11-26 20:07:57.009391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-11-26 20:07:57.009779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-11-26 20:07:57.009808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.307 [2024-11-26 20:07:57.010062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-11-26 20:07:57.010090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-11-26 20:07:57.010333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-11-26 20:07:57.010363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 
00:29:56.307 [2024-11-26 20:07:57.010806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-11-26 20:07:57.010835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-11-26 20:07:57.011185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-11-26 20:07:57.011216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-11-26 20:07:57.011460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-11-26 20:07:57.011488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-11-26 20:07:57.011873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-11-26 20:07:57.011902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-11-26 20:07:57.012252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-11-26 20:07:57.012283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-11-26 20:07:57.012677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-11-26 20:07:57.012706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-11-26 20:07:57.013048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-11-26 20:07:57.013076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-11-26 20:07:57.013431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-11-26 20:07:57.013462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-11-26 20:07:57.013675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-11-26 20:07:57.013704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-11-26 20:07:57.014063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-11-26 20:07:57.014091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 
00:29:56.307 [2024-11-26 20:07:57.014440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-11-26 20:07:57.014470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-11-26 20:07:57.014595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-11-26 20:07:57.014623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-11-26 20:07:57.014978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-11-26 20:07:57.015006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-11-26 20:07:57.015257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-11-26 20:07:57.015287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-11-26 20:07:57.015570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-11-26 20:07:57.015599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-11-26 20:07:57.015942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-11-26 20:07:57.015971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-11-26 20:07:57.016263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-11-26 20:07:57.016293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-11-26 20:07:57.016682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-11-26 20:07:57.016711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-11-26 20:07:57.017113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-11-26 20:07:57.017142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-11-26 20:07:57.017534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-11-26 20:07:57.017564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 
00:29:56.307 [2024-11-26 20:07:57.017961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-11-26 20:07:57.017990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-11-26 20:07:57.018204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-11-26 20:07:57.018234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-11-26 20:07:57.018446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-11-26 20:07:57.018484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 20:07:57.018853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.018882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 20:07:57.019101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.019129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 20:07:57.019510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.019540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 20:07:57.019895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.019924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 20:07:57.020282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.020317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 20:07:57.020526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.020555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 20:07:57.020910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.020939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 
00:29:56.308 [2024-11-26 20:07:57.021302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.021332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 20:07:57.021689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.021718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 20:07:57.022083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.022110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 20:07:57.022489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.022519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 20:07:57.022878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.022908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 20:07:57.023176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.023206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 20:07:57.023557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.023586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 20:07:57.023949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.023980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 20:07:57.024078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.024108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 20:07:57.024496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.024528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 
00:29:56.308 [2024-11-26 20:07:57.024641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.024672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 20:07:57.025072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.025102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 20:07:57.025385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.025416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 20:07:57.025773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.025803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 20:07:57.026032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.026061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 20:07:57.026418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.026450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:56.308 [2024-11-26 20:07:57.026760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.026790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:56.308 [2024-11-26 20:07:57.027153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.027190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 
00:29:56.308 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:56.308 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:56.308 [2024-11-26 20:07:57.027567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.027597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:56.308 [2024-11-26 20:07:57.027867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.027896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 20:07:57.028176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.028206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 20:07:57.028553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.028590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 20:07:57.028923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.028953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 20:07:57.029408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.029439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 20:07:57.029790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.029819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 20:07:57.030067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.030099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 20:07:57.030346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 20:07:57.030377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 
00:29:56.308 [2024-11-26 20:07:57.030610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.308 [2024-11-26 20:07:57.030638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.308 qpair failed and we were unable to recover it.
00:29:56.308 [2024-11-26 20:07:57.030973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.308 [2024-11-26 20:07:57.031002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.308 qpair failed and we were unable to recover it.
00:29:56.308 [2024-11-26 20:07:57.031353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.308 [2024-11-26 20:07:57.031387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.308 qpair failed and we were unable to recover it.
00:29:56.308 [2024-11-26 20:07:57.031612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.308 [2024-11-26 20:07:57.031640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.308 qpair failed and we were unable to recover it.
00:29:56.308 [2024-11-26 20:07:57.031998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.308 [2024-11-26 20:07:57.032030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.308 qpair failed and we were unable to recover it.
00:29:56.308 [2024-11-26 20:07:57.032187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.308 [2024-11-26 20:07:57.032217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.308 qpair failed and we were unable to recover it.
00:29:56.308 [2024-11-26 20:07:57.032616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.308 [2024-11-26 20:07:57.032647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.308 qpair failed and we were unable to recover it.
00:29:56.308 [2024-11-26 20:07:57.032885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.308 [2024-11-26 20:07:57.032914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.308 qpair failed and we were unable to recover it.
00:29:56.308 [2024-11-26 20:07:57.033177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.308 [2024-11-26 20:07:57.033214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.308 qpair failed and we were unable to recover it.
00:29:56.308 [2024-11-26 20:07:57.033464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.308 [2024-11-26 20:07:57.033493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.308 qpair failed and we were unable to recover it.
00:29:56.308 [2024-11-26 20:07:57.033860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.308 [2024-11-26 20:07:57.033890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.308 qpair failed and we were unable to recover it.
00:29:56.308 [2024-11-26 20:07:57.034247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.308 [2024-11-26 20:07:57.034279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.308 qpair failed and we were unable to recover it.
00:29:56.308 [2024-11-26 20:07:57.034641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.309 [2024-11-26 20:07:57.034670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.309 qpair failed and we were unable to recover it.
00:29:56.309 [2024-11-26 20:07:57.035027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.309 [2024-11-26 20:07:57.035057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.309 qpair failed and we were unable to recover it.
00:29:56.309 [2024-11-26 20:07:57.035301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.309 [2024-11-26 20:07:57.035331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.309 qpair failed and we were unable to recover it.
00:29:56.309 [2024-11-26 20:07:57.035689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.309 [2024-11-26 20:07:57.035718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.309 qpair failed and we were unable to recover it.
00:29:56.309 [2024-11-26 20:07:57.036078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.309 [2024-11-26 20:07:57.036108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.309 qpair failed and we were unable to recover it.
00:29:56.309 [2024-11-26 20:07:57.036493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.309 [2024-11-26 20:07:57.036524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.309 qpair failed and we were unable to recover it.
00:29:56.309 [2024-11-26 20:07:57.036715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.309 [2024-11-26 20:07:57.036748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.309 qpair failed and we were unable to recover it.
00:29:56.309 [2024-11-26 20:07:57.037139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.309 [2024-11-26 20:07:57.037178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.309 qpair failed and we were unable to recover it.
00:29:56.309 [2024-11-26 20:07:57.037520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.309 [2024-11-26 20:07:57.037550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.309 qpair failed and we were unable to recover it.
00:29:56.309 [2024-11-26 20:07:57.037912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.309 [2024-11-26 20:07:57.037941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.309 qpair failed and we were unable to recover it.
00:29:56.309 [2024-11-26 20:07:57.038319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.309 [2024-11-26 20:07:57.038349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.309 qpair failed and we were unable to recover it.
00:29:56.309 [2024-11-26 20:07:57.038589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.309 [2024-11-26 20:07:57.038619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.309 qpair failed and we were unable to recover it.
00:29:56.309 [2024-11-26 20:07:57.038999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.309 [2024-11-26 20:07:57.039030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.309 qpair failed and we were unable to recover it.
00:29:56.309 [2024-11-26 20:07:57.039370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.309 [2024-11-26 20:07:57.039400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.309 qpair failed and we were unable to recover it.
00:29:56.309 [2024-11-26 20:07:57.039790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.309 [2024-11-26 20:07:57.039820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.309 qpair failed and we were unable to recover it.
00:29:56.309 [2024-11-26 20:07:57.040183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.309 [2024-11-26 20:07:57.040216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.309 qpair failed and we were unable to recover it.
00:29:56.309 [2024-11-26 20:07:57.040575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.309 [2024-11-26 20:07:57.040604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.309 qpair failed and we were unable to recover it.
00:29:56.309 [2024-11-26 20:07:57.040982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.309 [2024-11-26 20:07:57.041012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.309 qpair failed and we were unable to recover it.
00:29:56.309 [2024-11-26 20:07:57.041433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.309 [2024-11-26 20:07:57.041464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.309 qpair failed and we were unable to recover it.
00:29:56.309 [2024-11-26 20:07:57.041830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.309 [2024-11-26 20:07:57.041871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.309 qpair failed and we were unable to recover it.
00:29:56.309 [2024-11-26 20:07:57.042235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.309 [2024-11-26 20:07:57.042268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.309 qpair failed and we were unable to recover it.
00:29:56.309 [2024-11-26 20:07:57.042649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.309 [2024-11-26 20:07:57.042678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.309 qpair failed and we were unable to recover it.
00:29:56.309 [2024-11-26 20:07:57.043050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.309 [2024-11-26 20:07:57.043079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.309 qpair failed and we were unable to recover it.
00:29:56.309 [2024-11-26 20:07:57.043421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.309 [2024-11-26 20:07:57.043451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.309 qpair failed and we were unable to recover it.
00:29:56.309 [2024-11-26 20:07:57.043716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.309 [2024-11-26 20:07:57.043744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.309 qpair failed and we were unable to recover it.
00:29:56.309 [2024-11-26 20:07:57.043960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.309 [2024-11-26 20:07:57.043989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.309 qpair failed and we were unable to recover it.
00:29:56.309 [2024-11-26 20:07:57.044421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.309 [2024-11-26 20:07:57.044451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.309 qpair failed and we were unable to recover it.
00:29:56.309 [2024-11-26 20:07:57.044829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.309 [2024-11-26 20:07:57.044858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.309 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.045219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.045250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.045600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.045630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.045996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.046025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.046414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.046443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.046839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.046869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.047221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.047250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.047612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.047642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.048015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.048043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.048393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.048429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.048792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.048822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.049085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.049114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.049486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.049516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.049877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.049907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.050236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.050268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.050640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.050669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.051048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.051077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.051488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.051522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.051888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.051917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.052178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.052212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.052592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.052622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.052978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.053009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.053154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.053193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.053638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.053668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.053888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.053919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.054358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.054388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.054736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.054767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.055004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.055033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.055274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.055303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.055560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.055592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.055971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.056001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.056368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.056399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.056689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.056717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.057066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.057097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.057474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.057505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.057846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.057875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.058255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.058285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.058654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.058684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.058913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.058941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.059290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.059320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.059544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.059577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.059827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.059859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.060089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.060118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.060473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.060503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.310 [2024-11-26 20:07:57.060754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.310 [2024-11-26 20:07:57.060783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.310 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.061124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.061153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.061512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.061541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.061907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.061936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.062210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.062240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.062518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.062553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.062901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.062932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.063210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.063241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.063510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.063539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.063857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.063887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.064255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.064286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.064614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.064650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.064872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.064901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.065229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.065260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.065601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.065632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.065840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.065869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.066188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.066219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.066550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.066579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.066782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.066813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.067179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.067210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.067306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.067334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.067683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.067712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.068096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.068126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.068378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.068407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.068790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.068819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.069145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.069190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.069557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.069595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.069958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.069986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.070411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.070441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.070798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:56.311 [2024-11-26 20:07:57.070829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.071063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.071094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:56.311 [2024-11-26 20:07:57.071492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.071524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:56.311 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:56.311 [2024-11-26 20:07:57.071895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.071926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.072166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.072198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.072466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.072494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.072853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.072882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.073250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.073279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.073519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.073547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.073805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.073833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.074196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.074227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.074493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.074522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.074740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.074768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.311 qpair failed and we were unable to recover it.
00:29:56.311 [2024-11-26 20:07:57.075010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.311 [2024-11-26 20:07:57.075042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.075388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.075432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.075677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.075706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.075910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.075940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.076298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.076328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.076548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.076576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.076917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.076945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.077171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.077201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.077443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.077473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.077809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.077837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.078153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.078199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.078534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.078563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.078898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.078926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.079292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.079322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.079683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.079713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.079942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.079974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.080321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.080351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.080715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.080744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.081124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.081152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.081539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.081568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.081966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.081995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.082377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.082407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.082760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.082788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.083169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.083199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.083567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.083596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.083840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.083870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.084217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.084248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.084609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.084638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.085002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.085031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.085412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.085441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.085770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.085799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.086175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.086206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.086566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.086596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.086969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.086997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.087398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.087428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.087643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.087672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.087961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.087991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.088247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.088277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.088713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.088742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.089099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.089128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.089328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.089358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.089750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.089787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.090172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.312 [2024-11-26 20:07:57.090202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420
00:29:56.312 qpair failed and we were unable to recover it.
00:29:56.312 [2024-11-26 20:07:57.090550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.312 [2024-11-26 20:07:57.090579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.312 qpair failed and we were unable to recover it. 00:29:56.312 [2024-11-26 20:07:57.090805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.312 [2024-11-26 20:07:57.090834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.312 qpair failed and we were unable to recover it. 00:29:56.312 [2024-11-26 20:07:57.091193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.312 [2024-11-26 20:07:57.091223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.312 qpair failed and we were unable to recover it. 00:29:56.313 [2024-11-26 20:07:57.091446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.313 [2024-11-26 20:07:57.091476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.313 qpair failed and we were unable to recover it. 00:29:56.313 [2024-11-26 20:07:57.091705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.313 [2024-11-26 20:07:57.091732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.313 qpair failed and we were unable to recover it. 00:29:56.313 [2024-11-26 20:07:57.092093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.313 [2024-11-26 20:07:57.092122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.313 qpair failed and we were unable to recover it. 00:29:56.313 [2024-11-26 20:07:57.092496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.313 [2024-11-26 20:07:57.092526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.313 qpair failed and we were unable to recover it. 00:29:56.313 [2024-11-26 20:07:57.092884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.313 [2024-11-26 20:07:57.092913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.313 qpair failed and we were unable to recover it. 00:29:56.313 [2024-11-26 20:07:57.093286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.313 [2024-11-26 20:07:57.093316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.313 qpair failed and we were unable to recover it. 00:29:56.581 [2024-11-26 20:07:57.093686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.581 [2024-11-26 20:07:57.093717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.581 qpair failed and we were unable to recover it. 
00:29:56.581 [2024-11-26 20:07:57.093994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.581 [2024-11-26 20:07:57.094024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.581 qpair failed and we were unable to recover it. 00:29:56.581 [2024-11-26 20:07:57.094405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.581 [2024-11-26 20:07:57.094435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.581 qpair failed and we were unable to recover it. 00:29:56.581 [2024-11-26 20:07:57.094792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.581 [2024-11-26 20:07:57.094820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.581 qpair failed and we were unable to recover it. 00:29:56.581 [2024-11-26 20:07:57.095031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.581 [2024-11-26 20:07:57.095059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.581 qpair failed and we were unable to recover it. 00:29:56.581 [2024-11-26 20:07:57.095303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.581 [2024-11-26 20:07:57.095332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.581 qpair failed and we were unable to recover it. 00:29:56.581 [2024-11-26 20:07:57.095536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.581 [2024-11-26 20:07:57.095565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.581 qpair failed and we were unable to recover it. 00:29:56.581 [2024-11-26 20:07:57.095846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.581 [2024-11-26 20:07:57.095875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.581 qpair failed and we were unable to recover it. 00:29:56.581 [2024-11-26 20:07:57.096225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.581 [2024-11-26 20:07:57.096255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.581 qpair failed and we were unable to recover it. 00:29:56.581 [2024-11-26 20:07:57.096642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 20:07:57.096672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 20:07:57.097039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 20:07:57.097067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 
00:29:56.582 [2024-11-26 20:07:57.097451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 20:07:57.097481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 20:07:57.097836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 20:07:57.097865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 20:07:57.098234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 20:07:57.098263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 20:07:57.098651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 20:07:57.098681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 20:07:57.099049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 20:07:57.099078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 20:07:57.099223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 20:07:57.099257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 20:07:57.099642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 20:07:57.099672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 20:07:57.099975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 20:07:57.100003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 20:07:57.100256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 20:07:57.100286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 20:07:57.100656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 20:07:57.100686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 
00:29:56.582 [2024-11-26 20:07:57.101033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 20:07:57.101062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 20:07:57.101411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 20:07:57.101441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 20:07:57.101799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 20:07:57.101828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 20:07:57.102088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 20:07:57.102117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 20:07:57.102517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 20:07:57.102549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 20:07:57.102924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 20:07:57.102956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 20:07:57.103211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 20:07:57.103241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 20:07:57.103622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 20:07:57.103651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 20:07:57.104009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 20:07:57.104044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 20:07:57.104288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 20:07:57.104317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 
00:29:56.582 [2024-11-26 20:07:57.104588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 20:07:57.104616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 20:07:57.104946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 20:07:57.104975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 20:07:57.105328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 20:07:57.105358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 20:07:57.105732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 20:07:57.105761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 20:07:57.106135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 20:07:57.106174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 20:07:57.106535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 20:07:57.106564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 20:07:57.106701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.106729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 20:07:57.107107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.107136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 20:07:57.107549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.107579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 20:07:57.107798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.107826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 
00:29:56.583 [2024-11-26 20:07:57.108195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.108226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 20:07:57.108479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.108509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 20:07:57.108862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.108890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 20:07:57.109149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.109186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 20:07:57.109328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.109365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 20:07:57.109760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.109788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 20:07:57.110149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.110200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 20:07:57.110558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.110587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 20:07:57.110958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.110987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 20:07:57.111385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.111415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 
00:29:56.583 [2024-11-26 20:07:57.111636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.111664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 20:07:57.111906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.111934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 20:07:57.112203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.112232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 20:07:57.112581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.112610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 20:07:57.112933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.112963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 20:07:57.113213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.113244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 20:07:57.113467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.113495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 20:07:57.113844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.113873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 20:07:57.114296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.114328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 20:07:57.114556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.114586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 
00:29:56.583 [2024-11-26 20:07:57.114846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.114876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 20:07:57.115231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.115262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 20:07:57.115637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.115666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 Malloc0 00:29:56.583 [2024-11-26 20:07:57.116046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.116075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 20:07:57.116328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 20:07:57.116358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.584 [2024-11-26 20:07:57.116751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.116781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:56.584 [2024-11-26 20:07:57.117152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.117190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.584 [2024-11-26 20:07:57.117543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.117572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 
00:29:56.584 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:56.584 [2024-11-26 20:07:57.117935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.117964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-11-26 20:07:57.118338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.118368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-11-26 20:07:57.118639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.118667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-11-26 20:07:57.119012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.119040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-11-26 20:07:57.119264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.119293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-11-26 20:07:57.119673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.119702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-11-26 20:07:57.120073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.120101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-11-26 20:07:57.120461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.120492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-11-26 20:07:57.120859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.120888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 
00:29:56.584 [2024-11-26 20:07:57.121260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.121290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-11-26 20:07:57.121651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.121679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-11-26 20:07:57.122057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.122086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-11-26 20:07:57.122492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.122522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-11-26 20:07:57.122886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.122914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-11-26 20:07:57.123038] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:56.584 [2024-11-26 20:07:57.123195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.123225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-11-26 20:07:57.123673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.123702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-11-26 20:07:57.123923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.123952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-11-26 20:07:57.124194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.124225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 
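The "*** TCP Transport Init ***" notice above is the target processing the traced "rpc_cmd nvmf_create_transport -t tcp -o" call from host/target_disconnect.sh. rpc_cmd is the autotest harness wrapper around SPDK's scripts/rpc.py; assuming the standard rpc.py interface and the default RPC socket, the equivalent standalone invocation would look roughly like this sketch:

  # Sketch: create the TCP transport on a running SPDK target
  # (assumes scripts/rpc.py and the default /var/tmp/spdk.sock RPC socket)
  ./scripts/rpc.py nvmf_create_transport -t tcp -o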
00:29:56.584 [2024-11-26 20:07:57.124599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.124628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-11-26 20:07:57.124979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.125007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-11-26 20:07:57.125435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.125465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-11-26 20:07:57.125693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.125721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-11-26 20:07:57.126113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.126144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-11-26 20:07:57.126512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.126541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-11-26 20:07:57.126920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.126949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-11-26 20:07:57.127192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-11-26 20:07:57.127222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-11-26 20:07:57.127610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-11-26 20:07:57.127639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-11-26 20:07:57.128010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-11-26 20:07:57.128038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 
00:29:56.585 [2024-11-26 20:07:57.128283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-11-26 20:07:57.128311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-11-26 20:07:57.128540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-11-26 20:07:57.128569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-11-26 20:07:57.128934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-11-26 20:07:57.128963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-11-26 20:07:57.129324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-11-26 20:07:57.129354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-11-26 20:07:57.129604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-11-26 20:07:57.129632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-11-26 20:07:57.130005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-11-26 20:07:57.130033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-11-26 20:07:57.130389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-11-26 20:07:57.130418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-11-26 20:07:57.130787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-11-26 20:07:57.130816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-11-26 20:07:57.130919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-11-26 20:07:57.130947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-11-26 20:07:57.131304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-11-26 20:07:57.131333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 
00:29:56.585 [2024-11-26 20:07:57.131699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-11-26 20:07:57.131733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-11-26 20:07:57.132087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-11-26 20:07:57.132115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.585 [2024-11-26 20:07:57.132382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-11-26 20:07:57.132411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 [2024-11-26 20:07:57.132779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-11-26 20:07:57.132809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x [2024-11-26 20:07:57.133174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-11-26 20:07:57.133205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-11-26 20:07:57.133562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-11-26 20:07:57.133592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-11-26 20:07:57.133977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-11-26 20:07:57.134007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-11-26 20:07:57.134390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-11-26 20:07:57.134422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it.
00:29:56.585 [2024-11-26 20:07:57.134666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-11-26 20:07:57.134696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-11-26 20:07:57.135051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-11-26 20:07:57.135082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-11-26 20:07:57.135466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-11-26 20:07:57.135498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-11-26 20:07:57.135725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-11-26 20:07:57.135755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-11-26 20:07:57.135929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-11-26 20:07:57.135959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-11-26 20:07:57.136308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-11-26 20:07:57.136339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-11-26 20:07:57.136694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-11-26 20:07:57.136725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-11-26 20:07:57.137073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-11-26 20:07:57.137103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-11-26 20:07:57.137454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-11-26 20:07:57.137484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-11-26 20:07:57.137836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-11-26 20:07:57.137867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 
00:29:56.586 [2024-11-26 20:07:57.138088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-11-26 20:07:57.138117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-11-26 20:07:57.138501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-11-26 20:07:57.138532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-11-26 20:07:57.138896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-11-26 20:07:57.138927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-11-26 20:07:57.139051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-11-26 20:07:57.139081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-11-26 20:07:57.139452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-11-26 20:07:57.139483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-11-26 20:07:57.139845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-11-26 20:07:57.139876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-11-26 20:07:57.140097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-11-26 20:07:57.140130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-11-26 20:07:57.140521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-11-26 20:07:57.140558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-11-26 20:07:57.140912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-11-26 20:07:57.140943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-11-26 20:07:57.141302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-11-26 20:07:57.141334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 
00:29:56.586 [2024-11-26 20:07:57.141586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-11-26 20:07:57.141615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-11-26 20:07:57.141989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-11-26 20:07:57.142020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-11-26 20:07:57.142239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-11-26 20:07:57.142271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-11-26 20:07:57.142637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-11-26 20:07:57.142666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-11-26 20:07:57.143013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-11-26 20:07:57.143042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-11-26 20:07:57.143437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-11-26 20:07:57.143467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-11-26 20:07:57.143841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-11-26 20:07:57.143870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.586 [2024-11-26 20:07:57.144247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-11-26 20:07:57.144277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:56.586 [2024-11-26 20:07:57.144659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-11-26 20:07:57.144689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 
00:29:56.586 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.586 [2024-11-26 20:07:57.144783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-11-26 20:07:57.144826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:56.586 [2024-11-26 20:07:57.145178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-11-26 20:07:57.145207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-11-26 20:07:57.145614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-11-26 20:07:57.145643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-11-26 20:07:57.145829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-11-26 20:07:57.145859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-11-26 20:07:57.146128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-11-26 20:07:57.146157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-11-26 20:07:57.146417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-11-26 20:07:57.146445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-11-26 20:07:57.146813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-11-26 20:07:57.146843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-11-26 20:07:57.147217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-11-26 20:07:57.147247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-11-26 20:07:57.147506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-11-26 20:07:57.147535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 
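Between the connect() retries, the traced RPCs rebuild the target side that this disconnect test exercises: nvmf_create_subsystem creates nqn.2016-06.io.spdk:cnode1 (-a allows any host, -s sets the serial number), and nvmf_subsystem_add_ns attaches the Malloc0 bdev as its namespace. A hedged sketch of the same sequence as standalone scripts/rpc.py calls follows; the final listener step is an assumption not visible in this excerpt, and until such a listener exists on 10.0.0.2:4420, every connect() attempt is refused with errno 111 exactly as logged above:

  # Sketch of the target-side subsystem setup (assumes SPDK's scripts/rpc.py)
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Assumed follow-up (not shown in this excerpt): expose the subsystem so
  # initiators can reach it on 10.0.0.2:4420
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420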
00:29:56.587 [2024-11-26 20:07:57.147746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-11-26 20:07:57.147775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-11-26 20:07:57.148145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-11-26 20:07:57.148182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-11-26 20:07:57.148415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-11-26 20:07:57.148448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-11-26 20:07:57.148790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-11-26 20:07:57.148819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-11-26 20:07:57.149229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-11-26 20:07:57.149260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-11-26 20:07:57.149641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-11-26 20:07:57.149671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-11-26 20:07:57.150041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-11-26 20:07:57.150070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-11-26 20:07:57.150410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-11-26 20:07:57.150441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-11-26 20:07:57.150749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-11-26 20:07:57.150778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-11-26 20:07:57.151133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-11-26 20:07:57.151172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 
00:29:56.587 [2024-11-26 20:07:57.151517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-11-26 20:07:57.151546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-11-26 20:07:57.151896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-11-26 20:07:57.151927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-11-26 20:07:57.152155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-11-26 20:07:57.152193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-11-26 20:07:57.152436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-11-26 20:07:57.152465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-11-26 20:07:57.152798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-11-26 20:07:57.152827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-11-26 20:07:57.153065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-11-26 20:07:57.153096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-11-26 20:07:57.153357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-11-26 20:07:57.153387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-11-26 20:07:57.153657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-11-26 20:07:57.153698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-11-26 20:07:57.154039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-11-26 20:07:57.154070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-11-26 20:07:57.154416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-11-26 20:07:57.154446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 
00:29:56.588 [2024-11-26 20:07:57.154800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-11-26 20:07:57.154830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-11-26 20:07:57.155064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-11-26 20:07:57.155095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-11-26 20:07:57.155355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-11-26 20:07:57.155390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-11-26 20:07:57.155754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-11-26 20:07:57.155785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-11-26 20:07:57.156140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-11-26 20:07:57.156180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.588 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-11-26 20:07:57.156558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-11-26 20:07:57.156587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:56.588 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.588 [2024-11-26 20:07:57.156943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-11-26 20:07:57.156974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:56.588 [2024-11-26 20:07:57.157327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-11-26 20:07:57.157358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 
00:29:56.588 [2024-11-26 20:07:57.157731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-11-26 20:07:57.157760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-11-26 20:07:57.158140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-11-26 20:07:57.158187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-11-26 20:07:57.158470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-11-26 20:07:57.158501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-11-26 20:07:57.158857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-11-26 20:07:57.158886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-11-26 20:07:57.159140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-11-26 20:07:57.159178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-11-26 20:07:57.159322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-11-26 20:07:57.159353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-11-26 20:07:57.159599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-11-26 20:07:57.159629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-11-26 20:07:57.159964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-11-26 20:07:57.159993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-11-26 20:07:57.160281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-11-26 20:07:57.160313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-11-26 20:07:57.160659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-11-26 20:07:57.160696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 
00:29:56.588 [2024-11-26 20:07:57.161037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-11-26 20:07:57.161066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-11-26 20:07:57.161455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-11-26 20:07:57.161487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-11-26 20:07:57.161848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-11-26 20:07:57.161876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-11-26 20:07:57.162232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-11-26 20:07:57.162262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-11-26 20:07:57.162488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-11-26 20:07:57.162519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-11-26 20:07:57.162872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-11-26 20:07:57.162903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-11-26 20:07:57.163261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-11-26 20:07:57.163291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8918000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 
00:29:56.588 [2024-11-26 20:07:57.163437] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:56.588 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:56.588 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:56.588 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:56.588 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:56.588 [2024-11-26 20:07:57.174330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.588 [2024-11-26 20:07:57.174457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.588 [2024-11-26 20:07:57.174501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.588 [2024-11-26 20:07:57.174522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.588 [2024-11-26 20:07:57.174540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:56.588 [2024-11-26 20:07:57.174588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:56.588 qpair failed and we were unable to recover it.
00:29:56.589 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:56.589 20:07:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3848160
00:29:56.589 [2024-11-26 20:07:57.184148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.589 [2024-11-26 20:07:57.184251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.589 [2024-11-26 20:07:57.184277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.589 [2024-11-26 20:07:57.184290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.589 [2024-11-26 20:07:57.184301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:56.589 [2024-11-26 20:07:57.184328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:56.589 qpair failed and we were unable to recover it.
[... the same failure block (ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: Unknown controller ID 0x1 / nvme_fabric.c: 599: Connect command failed, rc -5 / nvme_fabric.c: 610: Connect command completed with error: sct 1, sc 130 / nvme_tcp.c:2348: Failed to poll NVMe-oF Fabric CONNECT command / nvme_tcp.c:2125: Failed to connect tqpair=0x7f8918000b90 / nvme_qpair.c: 812: CQ transport error -6 (No such device or address) on qpair id 2, ending in "qpair failed and we were unable to recover it.") repeats at roughly 10 ms intervals from 20:07:57.194 through 20:07:57.635 as the host keeps retrying the connection; the repetitions are identical except for timestamps and are omitted here ...]
00:29:56.855 [2024-11-26 20:07:57.645412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.855 [2024-11-26 20:07:57.645479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.855 [2024-11-26 20:07:57.645496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.855 [2024-11-26 20:07:57.645505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.855 [2024-11-26 20:07:57.645513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:56.855 [2024-11-26 20:07:57.645531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-26 20:07:57.655412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.855 [2024-11-26 20:07:57.655482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.855 [2024-11-26 20:07:57.655500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.855 [2024-11-26 20:07:57.655507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.855 [2024-11-26 20:07:57.655513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:56.855 [2024-11-26 20:07:57.655530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-26 20:07:57.665430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.855 [2024-11-26 20:07:57.665499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.855 [2024-11-26 20:07:57.665516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.855 [2024-11-26 20:07:57.665523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.855 [2024-11-26 20:07:57.665529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:56.855 [2024-11-26 20:07:57.665551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:56.855 qpair failed and we were unable to recover it. 
00:29:57.117 [2024-11-26 20:07:57.675482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.117 [2024-11-26 20:07:57.675548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.117 [2024-11-26 20:07:57.675565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.117 [2024-11-26 20:07:57.675572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.117 [2024-11-26 20:07:57.675578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.117 [2024-11-26 20:07:57.675595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.117 qpair failed and we were unable to recover it. 00:29:57.117 [2024-11-26 20:07:57.685484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.117 [2024-11-26 20:07:57.685551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.117 [2024-11-26 20:07:57.685568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.117 [2024-11-26 20:07:57.685576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.117 [2024-11-26 20:07:57.685583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.117 [2024-11-26 20:07:57.685600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.117 qpair failed and we were unable to recover it. 00:29:57.117 [2024-11-26 20:07:57.695554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.117 [2024-11-26 20:07:57.695627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.117 [2024-11-26 20:07:57.695644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.117 [2024-11-26 20:07:57.695651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.117 [2024-11-26 20:07:57.695658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.117 [2024-11-26 20:07:57.695674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.117 qpair failed and we were unable to recover it. 
00:29:57.117 [2024-11-26 20:07:57.705597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.117 [2024-11-26 20:07:57.705689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.117 [2024-11-26 20:07:57.705706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.117 [2024-11-26 20:07:57.705713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.117 [2024-11-26 20:07:57.705719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.117 [2024-11-26 20:07:57.705736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.117 qpair failed and we were unable to recover it. 00:29:57.117 [2024-11-26 20:07:57.715589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.117 [2024-11-26 20:07:57.715654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.117 [2024-11-26 20:07:57.715671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.117 [2024-11-26 20:07:57.715678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.117 [2024-11-26 20:07:57.715685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.118 [2024-11-26 20:07:57.715701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-11-26 20:07:57.725604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.118 [2024-11-26 20:07:57.725688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.118 [2024-11-26 20:07:57.725704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.118 [2024-11-26 20:07:57.725712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.118 [2024-11-26 20:07:57.725718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.118 [2024-11-26 20:07:57.725734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.118 qpair failed and we were unable to recover it. 
00:29:57.118 [2024-11-26 20:07:57.735679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.118 [2024-11-26 20:07:57.735804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.118 [2024-11-26 20:07:57.735820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.118 [2024-11-26 20:07:57.735828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.118 [2024-11-26 20:07:57.735834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.118 [2024-11-26 20:07:57.735852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-11-26 20:07:57.745667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.118 [2024-11-26 20:07:57.745766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.118 [2024-11-26 20:07:57.745782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.118 [2024-11-26 20:07:57.745789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.118 [2024-11-26 20:07:57.745795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.118 [2024-11-26 20:07:57.745811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-11-26 20:07:57.755576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.118 [2024-11-26 20:07:57.755636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.118 [2024-11-26 20:07:57.755658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.118 [2024-11-26 20:07:57.755666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.118 [2024-11-26 20:07:57.755672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.118 [2024-11-26 20:07:57.755688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.118 qpair failed and we were unable to recover it. 
00:29:57.118 [2024-11-26 20:07:57.765742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.118 [2024-11-26 20:07:57.765842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.118 [2024-11-26 20:07:57.765858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.118 [2024-11-26 20:07:57.765866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.118 [2024-11-26 20:07:57.765873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.118 [2024-11-26 20:07:57.765889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-11-26 20:07:57.775809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.118 [2024-11-26 20:07:57.775926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.118 [2024-11-26 20:07:57.775943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.118 [2024-11-26 20:07:57.775950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.118 [2024-11-26 20:07:57.775957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.118 [2024-11-26 20:07:57.775973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-11-26 20:07:57.785674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.118 [2024-11-26 20:07:57.785742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.118 [2024-11-26 20:07:57.785761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.118 [2024-11-26 20:07:57.785769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.118 [2024-11-26 20:07:57.785775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.118 [2024-11-26 20:07:57.785793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.118 qpair failed and we were unable to recover it. 
00:29:57.118 [2024-11-26 20:07:57.795850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.118 [2024-11-26 20:07:57.795952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.118 [2024-11-26 20:07:57.795969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.118 [2024-11-26 20:07:57.795976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.118 [2024-11-26 20:07:57.795982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.118 [2024-11-26 20:07:57.796010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-11-26 20:07:57.805835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.118 [2024-11-26 20:07:57.805900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.118 [2024-11-26 20:07:57.805916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.118 [2024-11-26 20:07:57.805924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.118 [2024-11-26 20:07:57.805930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.118 [2024-11-26 20:07:57.805947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-11-26 20:07:57.815914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.118 [2024-11-26 20:07:57.815993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.118 [2024-11-26 20:07:57.816009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.118 [2024-11-26 20:07:57.816017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.118 [2024-11-26 20:07:57.816023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.118 [2024-11-26 20:07:57.816039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.118 qpair failed and we were unable to recover it. 
00:29:57.118 [2024-11-26 20:07:57.825923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.118 [2024-11-26 20:07:57.825983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.118 [2024-11-26 20:07:57.826000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.118 [2024-11-26 20:07:57.826007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.118 [2024-11-26 20:07:57.826014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.118 [2024-11-26 20:07:57.826030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-11-26 20:07:57.835942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.118 [2024-11-26 20:07:57.836055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.118 [2024-11-26 20:07:57.836072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.118 [2024-11-26 20:07:57.836080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.118 [2024-11-26 20:07:57.836087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.118 [2024-11-26 20:07:57.836103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.118 qpair failed and we were unable to recover it. 00:29:57.118 [2024-11-26 20:07:57.845846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.118 [2024-11-26 20:07:57.845934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.118 [2024-11-26 20:07:57.845952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.118 [2024-11-26 20:07:57.845959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.118 [2024-11-26 20:07:57.845966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.119 [2024-11-26 20:07:57.845982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.119 qpair failed and we were unable to recover it. 
00:29:57.119 [2024-11-26 20:07:57.856039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.119 [2024-11-26 20:07:57.856150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.119 [2024-11-26 20:07:57.856173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.119 [2024-11-26 20:07:57.856180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.119 [2024-11-26 20:07:57.856187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.119 [2024-11-26 20:07:57.856204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-11-26 20:07:57.866033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.119 [2024-11-26 20:07:57.866104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.119 [2024-11-26 20:07:57.866120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.119 [2024-11-26 20:07:57.866127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.119 [2024-11-26 20:07:57.866133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.119 [2024-11-26 20:07:57.866149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-11-26 20:07:57.875927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.119 [2024-11-26 20:07:57.875996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.119 [2024-11-26 20:07:57.876012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.119 [2024-11-26 20:07:57.876020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.119 [2024-11-26 20:07:57.876026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.119 [2024-11-26 20:07:57.876042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.119 qpair failed and we were unable to recover it. 
00:29:57.119 [2024-11-26 20:07:57.886091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.119 [2024-11-26 20:07:57.886163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.119 [2024-11-26 20:07:57.886186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.119 [2024-11-26 20:07:57.886193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.119 [2024-11-26 20:07:57.886199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.119 [2024-11-26 20:07:57.886215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-11-26 20:07:57.896196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.119 [2024-11-26 20:07:57.896273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.119 [2024-11-26 20:07:57.896291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.119 [2024-11-26 20:07:57.896298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.119 [2024-11-26 20:07:57.896305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.119 [2024-11-26 20:07:57.896322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-11-26 20:07:57.906119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.119 [2024-11-26 20:07:57.906183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.119 [2024-11-26 20:07:57.906200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.119 [2024-11-26 20:07:57.906207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.119 [2024-11-26 20:07:57.906214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.119 [2024-11-26 20:07:57.906230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.119 qpair failed and we were unable to recover it. 
00:29:57.119 [2024-11-26 20:07:57.916191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.119 [2024-11-26 20:07:57.916261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.119 [2024-11-26 20:07:57.916277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.119 [2024-11-26 20:07:57.916285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.119 [2024-11-26 20:07:57.916291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.119 [2024-11-26 20:07:57.916307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.119 [2024-11-26 20:07:57.926241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.119 [2024-11-26 20:07:57.926310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.119 [2024-11-26 20:07:57.926326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.119 [2024-11-26 20:07:57.926334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.119 [2024-11-26 20:07:57.926345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.119 [2024-11-26 20:07:57.926362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.119 qpair failed and we were unable to recover it. 00:29:57.382 [2024-11-26 20:07:57.936332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.382 [2024-11-26 20:07:57.936408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.382 [2024-11-26 20:07:57.936424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.382 [2024-11-26 20:07:57.936432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.382 [2024-11-26 20:07:57.936438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.382 [2024-11-26 20:07:57.936454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.382 qpair failed and we were unable to recover it. 
00:29:57.382 [2024-11-26 20:07:57.946265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.382 [2024-11-26 20:07:57.946326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.382 [2024-11-26 20:07:57.946342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.382 [2024-11-26 20:07:57.946349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.382 [2024-11-26 20:07:57.946356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.382 [2024-11-26 20:07:57.946372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.382 qpair failed and we were unable to recover it. 00:29:57.382 [2024-11-26 20:07:57.956262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.382 [2024-11-26 20:07:57.956332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.382 [2024-11-26 20:07:57.956348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.382 [2024-11-26 20:07:57.956355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.382 [2024-11-26 20:07:57.956362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.382 [2024-11-26 20:07:57.956377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.382 qpair failed and we were unable to recover it. 00:29:57.382 [2024-11-26 20:07:57.966313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.382 [2024-11-26 20:07:57.966385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.382 [2024-11-26 20:07:57.966401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.382 [2024-11-26 20:07:57.966408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.382 [2024-11-26 20:07:57.966415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.382 [2024-11-26 20:07:57.966432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.382 qpair failed and we were unable to recover it. 
00:29:57.382 [2024-11-26 20:07:57.976399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.382 [2024-11-26 20:07:57.976475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.382 [2024-11-26 20:07:57.976491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.382 [2024-11-26 20:07:57.976498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.382 [2024-11-26 20:07:57.976504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.382 [2024-11-26 20:07:57.976521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.382 qpair failed and we were unable to recover it. 00:29:57.382 [2024-11-26 20:07:57.986428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.382 [2024-11-26 20:07:57.986497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.382 [2024-11-26 20:07:57.986513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.382 [2024-11-26 20:07:57.986521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.382 [2024-11-26 20:07:57.986527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.382 [2024-11-26 20:07:57.986544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.382 qpair failed and we were unable to recover it. 00:29:57.383 [2024-11-26 20:07:57.996427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.383 [2024-11-26 20:07:57.996490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.383 [2024-11-26 20:07:57.996506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.383 [2024-11-26 20:07:57.996513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.383 [2024-11-26 20:07:57.996520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.383 [2024-11-26 20:07:57.996536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.383 qpair failed and we were unable to recover it. 
00:29:57.383 [2024-11-26 20:07:58.006468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.383 [2024-11-26 20:07:58.006537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.383 [2024-11-26 20:07:58.006554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.383 [2024-11-26 20:07:58.006561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.383 [2024-11-26 20:07:58.006567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.383 [2024-11-26 20:07:58.006584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.383 qpair failed and we were unable to recover it. 00:29:57.383 [2024-11-26 20:07:58.016516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.383 [2024-11-26 20:07:58.016589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.383 [2024-11-26 20:07:58.016610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.383 [2024-11-26 20:07:58.016617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.383 [2024-11-26 20:07:58.016624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.383 [2024-11-26 20:07:58.016640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.383 qpair failed and we were unable to recover it. 00:29:57.383 [2024-11-26 20:07:58.026515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.383 [2024-11-26 20:07:58.026601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.383 [2024-11-26 20:07:58.026617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.383 [2024-11-26 20:07:58.026625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.383 [2024-11-26 20:07:58.026631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.383 [2024-11-26 20:07:58.026647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.383 qpair failed and we were unable to recover it. 
00:29:57.383 [2024-11-26 20:07:58.036535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.383 [2024-11-26 20:07:58.036602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.383 [2024-11-26 20:07:58.036618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.383 [2024-11-26 20:07:58.036626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.383 [2024-11-26 20:07:58.036632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.383 [2024-11-26 20:07:58.036649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.383 qpair failed and we were unable to recover it. 00:29:57.383 [2024-11-26 20:07:58.046541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.383 [2024-11-26 20:07:58.046611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.383 [2024-11-26 20:07:58.046628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.383 [2024-11-26 20:07:58.046635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.383 [2024-11-26 20:07:58.046642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.383 [2024-11-26 20:07:58.046657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.383 qpair failed and we were unable to recover it. 00:29:57.383 [2024-11-26 20:07:58.056636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.383 [2024-11-26 20:07:58.056710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.383 [2024-11-26 20:07:58.056727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.383 [2024-11-26 20:07:58.056740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.383 [2024-11-26 20:07:58.056747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.383 [2024-11-26 20:07:58.056763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.383 qpair failed and we were unable to recover it. 
00:29:57.383 [2024-11-26 20:07:58.066530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.383 [2024-11-26 20:07:58.066595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.383 [2024-11-26 20:07:58.066610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.383 [2024-11-26 20:07:58.066618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.383 [2024-11-26 20:07:58.066625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.383 [2024-11-26 20:07:58.066641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.383 qpair failed and we were unable to recover it. 00:29:57.383 [2024-11-26 20:07:58.076689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.383 [2024-11-26 20:07:58.076766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.383 [2024-11-26 20:07:58.076783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.383 [2024-11-26 20:07:58.076790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.383 [2024-11-26 20:07:58.076798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.383 [2024-11-26 20:07:58.076813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.383 qpair failed and we were unable to recover it. 00:29:57.383 [2024-11-26 20:07:58.086712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.383 [2024-11-26 20:07:58.086782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.383 [2024-11-26 20:07:58.086800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.383 [2024-11-26 20:07:58.086808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.383 [2024-11-26 20:07:58.086814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.383 [2024-11-26 20:07:58.086831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.383 qpair failed and we were unable to recover it. 
00:29:57.383 [2024-11-26 20:07:58.096739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.383 [2024-11-26 20:07:58.096811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.383 [2024-11-26 20:07:58.096828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.383 [2024-11-26 20:07:58.096835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.383 [2024-11-26 20:07:58.096842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.383 [2024-11-26 20:07:58.096858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.383 qpair failed and we were unable to recover it. 00:29:57.383 [2024-11-26 20:07:58.106637] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.383 [2024-11-26 20:07:58.106714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.383 [2024-11-26 20:07:58.106731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.383 [2024-11-26 20:07:58.106738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.383 [2024-11-26 20:07:58.106744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.383 [2024-11-26 20:07:58.106761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.383 qpair failed and we were unable to recover it. 00:29:57.383 [2024-11-26 20:07:58.116811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.384 [2024-11-26 20:07:58.116881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.384 [2024-11-26 20:07:58.116900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.384 [2024-11-26 20:07:58.116908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.384 [2024-11-26 20:07:58.116917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.384 [2024-11-26 20:07:58.116935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.384 qpair failed and we were unable to recover it. 
00:29:57.384 [2024-11-26 20:07:58.126838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.384 [2024-11-26 20:07:58.126937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.384 [2024-11-26 20:07:58.126955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.384 [2024-11-26 20:07:58.126963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.384 [2024-11-26 20:07:58.126969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.384 [2024-11-26 20:07:58.126986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.384 qpair failed and we were unable to recover it. 00:29:57.384 [2024-11-26 20:07:58.136886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.384 [2024-11-26 20:07:58.136967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.384 [2024-11-26 20:07:58.136984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.384 [2024-11-26 20:07:58.136992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.384 [2024-11-26 20:07:58.136998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.384 [2024-11-26 20:07:58.137014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.384 qpair failed and we were unable to recover it. 00:29:57.384 [2024-11-26 20:07:58.146885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.384 [2024-11-26 20:07:58.146994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.384 [2024-11-26 20:07:58.147011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.384 [2024-11-26 20:07:58.147024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.384 [2024-11-26 20:07:58.147032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:57.384 [2024-11-26 20:07:58.147051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.384 qpair failed and we were unable to recover it. 
00:29:57.384 [2024-11-26 20:07:58.156772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.384 [2024-11-26 20:07:58.156872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.384 [2024-11-26 20:07:58.156891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.384 [2024-11-26 20:07:58.156898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.384 [2024-11-26 20:07:58.156905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.384 [2024-11-26 20:07:58.156922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.384 qpair failed and we were unable to recover it.
00:29:57.384 [2024-11-26 20:07:58.166928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.384 [2024-11-26 20:07:58.167004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.384 [2024-11-26 20:07:58.167021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.384 [2024-11-26 20:07:58.167028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.384 [2024-11-26 20:07:58.167035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.384 [2024-11-26 20:07:58.167051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.384 qpair failed and we were unable to recover it.
00:29:57.384 [2024-11-26 20:07:58.176991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.384 [2024-11-26 20:07:58.177062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.384 [2024-11-26 20:07:58.177078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.384 [2024-11-26 20:07:58.177086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.384 [2024-11-26 20:07:58.177092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.384 [2024-11-26 20:07:58.177109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.384 qpair failed and we were unable to recover it.
00:29:57.384 [2024-11-26 20:07:58.186974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.384 [2024-11-26 20:07:58.187037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.384 [2024-11-26 20:07:58.187054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.384 [2024-11-26 20:07:58.187066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.384 [2024-11-26 20:07:58.187073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.384 [2024-11-26 20:07:58.187090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.384 qpair failed and we were unable to recover it.
00:29:57.384 [2024-11-26 20:07:58.197022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.384 [2024-11-26 20:07:58.197083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.384 [2024-11-26 20:07:58.197099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.384 [2024-11-26 20:07:58.197106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.384 [2024-11-26 20:07:58.197112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.384 [2024-11-26 20:07:58.197128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.384 qpair failed and we were unable to recover it.
00:29:57.646 [2024-11-26 20:07:58.207068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.646 [2024-11-26 20:07:58.207134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.646 [2024-11-26 20:07:58.207150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.646 [2024-11-26 20:07:58.207162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.646 [2024-11-26 20:07:58.207169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.646 [2024-11-26 20:07:58.207185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.646 qpair failed and we were unable to recover it.
00:29:57.646 [2024-11-26 20:07:58.217073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.646 [2024-11-26 20:07:58.217167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.646 [2024-11-26 20:07:58.217183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.646 [2024-11-26 20:07:58.217191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.646 [2024-11-26 20:07:58.217197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.646 [2024-11-26 20:07:58.217214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.646 qpair failed and we were unable to recover it.
00:29:57.646 [2024-11-26 20:07:58.227106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.646 [2024-11-26 20:07:58.227167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.646 [2024-11-26 20:07:58.227184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.646 [2024-11-26 20:07:58.227192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.646 [2024-11-26 20:07:58.227198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.646 [2024-11-26 20:07:58.227221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.646 qpair failed and we were unable to recover it.
00:29:57.646 [2024-11-26 20:07:58.237136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.647 [2024-11-26 20:07:58.237198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.647 [2024-11-26 20:07:58.237215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.647 [2024-11-26 20:07:58.237222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.647 [2024-11-26 20:07:58.237228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.647 [2024-11-26 20:07:58.237244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.647 qpair failed and we were unable to recover it.
00:29:57.647 [2024-11-26 20:07:58.247244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.647 [2024-11-26 20:07:58.247324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.647 [2024-11-26 20:07:58.247339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.647 [2024-11-26 20:07:58.247347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.647 [2024-11-26 20:07:58.247353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.647 [2024-11-26 20:07:58.247369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.647 qpair failed and we were unable to recover it.
00:29:57.647 [2024-11-26 20:07:58.257269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.647 [2024-11-26 20:07:58.257332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.647 [2024-11-26 20:07:58.257348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.647 [2024-11-26 20:07:58.257356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.647 [2024-11-26 20:07:58.257362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.647 [2024-11-26 20:07:58.257379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.647 qpair failed and we were unable to recover it.
00:29:57.647 [2024-11-26 20:07:58.267247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.647 [2024-11-26 20:07:58.267320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.647 [2024-11-26 20:07:58.267336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.647 [2024-11-26 20:07:58.267343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.647 [2024-11-26 20:07:58.267349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.647 [2024-11-26 20:07:58.267366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.647 qpair failed and we were unable to recover it.
00:29:57.647 [2024-11-26 20:07:58.277264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.647 [2024-11-26 20:07:58.277363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.647 [2024-11-26 20:07:58.277380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.647 [2024-11-26 20:07:58.277387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.647 [2024-11-26 20:07:58.277393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.647 [2024-11-26 20:07:58.277409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.647 qpair failed and we were unable to recover it.
00:29:57.647 [2024-11-26 20:07:58.287326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.647 [2024-11-26 20:07:58.287397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.647 [2024-11-26 20:07:58.287413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.647 [2024-11-26 20:07:58.287420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.647 [2024-11-26 20:07:58.287427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.647 [2024-11-26 20:07:58.287443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.647 qpair failed and we were unable to recover it.
00:29:57.647 [2024-11-26 20:07:58.297380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.647 [2024-11-26 20:07:58.297487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.647 [2024-11-26 20:07:58.297503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.647 [2024-11-26 20:07:58.297511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.647 [2024-11-26 20:07:58.297517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.647 [2024-11-26 20:07:58.297532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.647 qpair failed and we were unable to recover it.
00:29:57.647 [2024-11-26 20:07:58.307338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.647 [2024-11-26 20:07:58.307399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.647 [2024-11-26 20:07:58.307414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.647 [2024-11-26 20:07:58.307422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.647 [2024-11-26 20:07:58.307428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.647 [2024-11-26 20:07:58.307445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.647 qpair failed and we were unable to recover it.
00:29:57.647 [2024-11-26 20:07:58.317414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.647 [2024-11-26 20:07:58.317471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.647 [2024-11-26 20:07:58.317491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.647 [2024-11-26 20:07:58.317499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.647 [2024-11-26 20:07:58.317506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.647 [2024-11-26 20:07:58.317522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.647 qpair failed and we were unable to recover it.
00:29:57.647 [2024-11-26 20:07:58.327438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.647 [2024-11-26 20:07:58.327505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.647 [2024-11-26 20:07:58.327520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.647 [2024-11-26 20:07:58.327527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.647 [2024-11-26 20:07:58.327533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.647 [2024-11-26 20:07:58.327550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.647 qpair failed and we were unable to recover it.
00:29:57.648 [2024-11-26 20:07:58.337492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.648 [2024-11-26 20:07:58.337565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.648 [2024-11-26 20:07:58.337580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.648 [2024-11-26 20:07:58.337587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.648 [2024-11-26 20:07:58.337593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.648 [2024-11-26 20:07:58.337610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.648 qpair failed and we were unable to recover it.
00:29:57.648 [2024-11-26 20:07:58.347479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.648 [2024-11-26 20:07:58.347546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.648 [2024-11-26 20:07:58.347562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.648 [2024-11-26 20:07:58.347570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.648 [2024-11-26 20:07:58.347576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.648 [2024-11-26 20:07:58.347592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.648 qpair failed and we were unable to recover it.
00:29:57.648 [2024-11-26 20:07:58.357512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.648 [2024-11-26 20:07:58.357582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.648 [2024-11-26 20:07:58.357637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.648 [2024-11-26 20:07:58.357646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.648 [2024-11-26 20:07:58.357657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.648 [2024-11-26 20:07:58.357686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.648 qpair failed and we were unable to recover it.
00:29:57.648 [2024-11-26 20:07:58.367599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.648 [2024-11-26 20:07:58.367667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.648 [2024-11-26 20:07:58.367685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.648 [2024-11-26 20:07:58.367693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.648 [2024-11-26 20:07:58.367700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.648 [2024-11-26 20:07:58.367717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.648 qpair failed and we were unable to recover it.
00:29:57.648 [2024-11-26 20:07:58.377614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.648 [2024-11-26 20:07:58.377687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.648 [2024-11-26 20:07:58.377704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.648 [2024-11-26 20:07:58.377711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.648 [2024-11-26 20:07:58.377718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.648 [2024-11-26 20:07:58.377734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.648 qpair failed and we were unable to recover it.
00:29:57.648 [2024-11-26 20:07:58.387558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.648 [2024-11-26 20:07:58.387630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.648 [2024-11-26 20:07:58.387646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.648 [2024-11-26 20:07:58.387653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.648 [2024-11-26 20:07:58.387660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.648 [2024-11-26 20:07:58.387676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.648 qpair failed and we were unable to recover it.
00:29:57.648 [2024-11-26 20:07:58.397618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.648 [2024-11-26 20:07:58.397678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.648 [2024-11-26 20:07:58.397695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.648 [2024-11-26 20:07:58.397703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.648 [2024-11-26 20:07:58.397709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.648 [2024-11-26 20:07:58.397725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.648 qpair failed and we were unable to recover it.
00:29:57.648 [2024-11-26 20:07:58.407664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.648 [2024-11-26 20:07:58.407734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.648 [2024-11-26 20:07:58.407751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.648 [2024-11-26 20:07:58.407758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.648 [2024-11-26 20:07:58.407765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.648 [2024-11-26 20:07:58.407781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.648 qpair failed and we were unable to recover it.
00:29:57.648 [2024-11-26 20:07:58.417727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.648 [2024-11-26 20:07:58.417792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.648 [2024-11-26 20:07:58.417808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.648 [2024-11-26 20:07:58.417816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.648 [2024-11-26 20:07:58.417822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.648 [2024-11-26 20:07:58.417839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.648 qpair failed and we were unable to recover it.
00:29:57.648 [2024-11-26 20:07:58.427667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.648 [2024-11-26 20:07:58.427727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.648 [2024-11-26 20:07:58.427744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.648 [2024-11-26 20:07:58.427751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.648 [2024-11-26 20:07:58.427758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.648 [2024-11-26 20:07:58.427774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.648 qpair failed and we were unable to recover it.
00:29:57.649 [2024-11-26 20:07:58.437743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.649 [2024-11-26 20:07:58.437822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.649 [2024-11-26 20:07:58.437838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.649 [2024-11-26 20:07:58.437845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.649 [2024-11-26 20:07:58.437852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.649 [2024-11-26 20:07:58.437868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.649 qpair failed and we were unable to recover it.
00:29:57.649 [2024-11-26 20:07:58.447767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.649 [2024-11-26 20:07:58.447881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.649 [2024-11-26 20:07:58.447902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.649 [2024-11-26 20:07:58.447909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.649 [2024-11-26 20:07:58.447916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.649 [2024-11-26 20:07:58.447932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.649 qpair failed and we were unable to recover it.
00:29:57.649 [2024-11-26 20:07:58.457831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.649 [2024-11-26 20:07:58.457912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.649 [2024-11-26 20:07:58.457947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.649 [2024-11-26 20:07:58.457957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.649 [2024-11-26 20:07:58.457965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.649 [2024-11-26 20:07:58.457988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.649 qpair failed and we were unable to recover it.
00:29:57.909 [2024-11-26 20:07:58.467830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.909 [2024-11-26 20:07:58.467926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.909 [2024-11-26 20:07:58.467961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.909 [2024-11-26 20:07:58.467971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.909 [2024-11-26 20:07:58.467979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.909 [2024-11-26 20:07:58.468002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.909 qpair failed and we were unable to recover it.
00:29:57.909 [2024-11-26 20:07:58.477852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.909 [2024-11-26 20:07:58.477923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.909 [2024-11-26 20:07:58.477958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.909 [2024-11-26 20:07:58.477968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.909 [2024-11-26 20:07:58.477975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.909 [2024-11-26 20:07:58.477998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.909 qpair failed and we were unable to recover it.
00:29:57.909 [2024-11-26 20:07:58.487887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.909 [2024-11-26 20:07:58.487973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.909 [2024-11-26 20:07:58.488009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.909 [2024-11-26 20:07:58.488019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.909 [2024-11-26 20:07:58.488032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.909 [2024-11-26 20:07:58.488056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.909 qpair failed and we were unable to recover it.
00:29:57.909 [2024-11-26 20:07:58.497956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.909 [2024-11-26 20:07:58.498034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.909 [2024-11-26 20:07:58.498053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.909 [2024-11-26 20:07:58.498061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.909 [2024-11-26 20:07:58.498068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.909 [2024-11-26 20:07:58.498085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.909 qpair failed and we were unable to recover it.
00:29:57.909 [2024-11-26 20:07:58.507954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.909 [2024-11-26 20:07:58.508062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.909 [2024-11-26 20:07:58.508080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.909 [2024-11-26 20:07:58.508088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.909 [2024-11-26 20:07:58.508095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.909 [2024-11-26 20:07:58.508112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.909 qpair failed and we were unable to recover it.
00:29:57.909 [2024-11-26 20:07:58.517955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.909 [2024-11-26 20:07:58.518040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.909 [2024-11-26 20:07:58.518056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.909 [2024-11-26 20:07:58.518064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.909 [2024-11-26 20:07:58.518070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.909 [2024-11-26 20:07:58.518086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.909 qpair failed and we were unable to recover it.
00:29:57.909 [2024-11-26 20:07:58.527986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.909 [2024-11-26 20:07:58.528047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.909 [2024-11-26 20:07:58.528063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.909 [2024-11-26 20:07:58.528070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.909 [2024-11-26 20:07:58.528077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.909 [2024-11-26 20:07:58.528093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.909 qpair failed and we were unable to recover it.
00:29:57.909 [2024-11-26 20:07:58.538020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.909 [2024-11-26 20:07:58.538081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.909 [2024-11-26 20:07:58.538096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.909 [2024-11-26 20:07:58.538103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.909 [2024-11-26 20:07:58.538110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.909 [2024-11-26 20:07:58.538125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.909 qpair failed and we were unable to recover it.
00:29:57.909 [2024-11-26 20:07:58.548006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.909 [2024-11-26 20:07:58.548059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.909 [2024-11-26 20:07:58.548074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.909 [2024-11-26 20:07:58.548081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.909 [2024-11-26 20:07:58.548087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.909 [2024-11-26 20:07:58.548103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.909 qpair failed and we were unable to recover it.
00:29:57.909 [2024-11-26 20:07:58.558094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.909 [2024-11-26 20:07:58.558155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.909 [2024-11-26 20:07:58.558175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.909 [2024-11-26 20:07:58.558182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.909 [2024-11-26 20:07:58.558188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.909 [2024-11-26 20:07:58.558204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.909 qpair failed and we were unable to recover it.
00:29:57.909 [2024-11-26 20:07:58.568091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.909 [2024-11-26 20:07:58.568150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.909 [2024-11-26 20:07:58.568170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.909 [2024-11-26 20:07:58.568177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.909 [2024-11-26 20:07:58.568183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.909 [2024-11-26 20:07:58.568199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.909 qpair failed and we were unable to recover it.
00:29:57.909 [2024-11-26 20:07:58.578074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.909 [2024-11-26 20:07:58.578169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.910 [2024-11-26 20:07:58.578188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.910 [2024-11-26 20:07:58.578195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.910 [2024-11-26 20:07:58.578201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.910 [2024-11-26 20:07:58.578217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.910 qpair failed and we were unable to recover it.
00:29:57.910 [2024-11-26 20:07:58.588162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.910 [2024-11-26 20:07:58.588222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.910 [2024-11-26 20:07:58.588237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.910 [2024-11-26 20:07:58.588244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.910 [2024-11-26 20:07:58.588250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.910 [2024-11-26 20:07:58.588265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.910 qpair failed and we were unable to recover it.
00:29:57.910 [2024-11-26 20:07:58.598214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.910 [2024-11-26 20:07:58.598271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.910 [2024-11-26 20:07:58.598285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.910 [2024-11-26 20:07:58.598292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.910 [2024-11-26 20:07:58.598299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.910 [2024-11-26 20:07:58.598313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.910 qpair failed and we were unable to recover it.
00:29:57.910 [2024-11-26 20:07:58.608175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.910 [2024-11-26 20:07:58.608285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.910 [2024-11-26 20:07:58.608299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.910 [2024-11-26 20:07:58.608306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.910 [2024-11-26 20:07:58.608312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.910 [2024-11-26 20:07:58.608327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.910 qpair failed and we were unable to recover it.
00:29:57.910 [2024-11-26 20:07:58.618186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.910 [2024-11-26 20:07:58.618243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.910 [2024-11-26 20:07:58.618257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.910 [2024-11-26 20:07:58.618272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.910 [2024-11-26 20:07:58.618278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.910 [2024-11-26 20:07:58.618293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.910 qpair failed and we were unable to recover it.
00:29:57.910 [2024-11-26 20:07:58.628244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.910 [2024-11-26 20:07:58.628306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.910 [2024-11-26 20:07:58.628320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.910 [2024-11-26 20:07:58.628327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.910 [2024-11-26 20:07:58.628333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.910 [2024-11-26 20:07:58.628347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.910 qpair failed and we were unable to recover it.
00:29:57.910 [2024-11-26 20:07:58.638273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.910 [2024-11-26 20:07:58.638331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.910 [2024-11-26 20:07:58.638344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.910 [2024-11-26 20:07:58.638351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.910 [2024-11-26 20:07:58.638357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.910 [2024-11-26 20:07:58.638372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.910 qpair failed and we were unable to recover it.
00:29:57.910 [2024-11-26 20:07:58.648309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.910 [2024-11-26 20:07:58.648372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.910 [2024-11-26 20:07:58.648385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.910 [2024-11-26 20:07:58.648393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.910 [2024-11-26 20:07:58.648400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.910 [2024-11-26 20:07:58.648416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.910 qpair failed and we were unable to recover it.
00:29:57.910 [2024-11-26 20:07:58.658316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.910 [2024-11-26 20:07:58.658369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.910 [2024-11-26 20:07:58.658382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.910 [2024-11-26 20:07:58.658388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.910 [2024-11-26 20:07:58.658395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.910 [2024-11-26 20:07:58.658409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.910 qpair failed and we were unable to recover it.
00:29:57.910 [2024-11-26 20:07:58.668355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.910 [2024-11-26 20:07:58.668410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.910 [2024-11-26 20:07:58.668423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.910 [2024-11-26 20:07:58.668430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.910 [2024-11-26 20:07:58.668436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.910 [2024-11-26 20:07:58.668450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.910 qpair failed and we were unable to recover it.
00:29:57.910 [2024-11-26 20:07:58.678401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.910 [2024-11-26 20:07:58.678493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.910 [2024-11-26 20:07:58.678507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.910 [2024-11-26 20:07:58.678514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.910 [2024-11-26 20:07:58.678520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.910 [2024-11-26 20:07:58.678534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.910 qpair failed and we were unable to recover it.
00:29:57.910 [2024-11-26 20:07:58.688428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.910 [2024-11-26 20:07:58.688483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.910 [2024-11-26 20:07:58.688496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.910 [2024-11-26 20:07:58.688503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.910 [2024-11-26 20:07:58.688509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.910 [2024-11-26 20:07:58.688523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.910 qpair failed and we were unable to recover it.
00:29:57.910 [2024-11-26 20:07:58.698324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.910 [2024-11-26 20:07:58.698373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.910 [2024-11-26 20:07:58.698386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.910 [2024-11-26 20:07:58.698393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.910 [2024-11-26 20:07:58.698399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.910 [2024-11-26 20:07:58.698413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.910 qpair failed and we were unable to recover it.
00:29:57.910 [2024-11-26 20:07:58.708408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.910 [2024-11-26 20:07:58.708462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.910 [2024-11-26 20:07:58.708476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.910 [2024-11-26 20:07:58.708483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.910 [2024-11-26 20:07:58.708489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.910 [2024-11-26 20:07:58.708503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.910 qpair failed and we were unable to recover it.
00:29:57.910 [2024-11-26 20:07:58.718459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.910 [2024-11-26 20:07:58.718524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.910 [2024-11-26 20:07:58.718536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.910 [2024-11-26 20:07:58.718543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.910 [2024-11-26 20:07:58.718549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:57.910 [2024-11-26 20:07:58.718563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.910 qpair failed and we were unable to recover it.
00:29:58.170 [2024-11-26 20:07:58.728498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.170 [2024-11-26 20:07:58.728561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.171 [2024-11-26 20:07:58.728574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.171 [2024-11-26 20:07:58.728581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.171 [2024-11-26 20:07:58.728587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.171 [2024-11-26 20:07:58.728601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.171 qpair failed and we were unable to recover it. 00:29:58.171 [2024-11-26 20:07:58.738518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.171 [2024-11-26 20:07:58.738570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.171 [2024-11-26 20:07:58.738585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.171 [2024-11-26 20:07:58.738592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.171 [2024-11-26 20:07:58.738598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.171 [2024-11-26 20:07:58.738616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.171 qpair failed and we were unable to recover it. 00:29:58.171 [2024-11-26 20:07:58.748499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.171 [2024-11-26 20:07:58.748548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.171 [2024-11-26 20:07:58.748561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.171 [2024-11-26 20:07:58.748572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.171 [2024-11-26 20:07:58.748578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.171 [2024-11-26 20:07:58.748592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.171 qpair failed and we were unable to recover it. 
00:29:58.171 [2024-11-26 20:07:58.758566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.171 [2024-11-26 20:07:58.758654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.171 [2024-11-26 20:07:58.758667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.171 [2024-11-26 20:07:58.758675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.171 [2024-11-26 20:07:58.758681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.171 [2024-11-26 20:07:58.758695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.171 qpair failed and we were unable to recover it. 00:29:58.171 [2024-11-26 20:07:58.768629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.171 [2024-11-26 20:07:58.768681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.171 [2024-11-26 20:07:58.768694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.171 [2024-11-26 20:07:58.768701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.171 [2024-11-26 20:07:58.768707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.171 [2024-11-26 20:07:58.768721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.171 qpair failed and we were unable to recover it. 00:29:58.171 [2024-11-26 20:07:58.778627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.171 [2024-11-26 20:07:58.778683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.171 [2024-11-26 20:07:58.778696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.171 [2024-11-26 20:07:58.778703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.171 [2024-11-26 20:07:58.778709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.171 [2024-11-26 20:07:58.778723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.171 qpair failed and we were unable to recover it. 
00:29:58.171 [2024-11-26 20:07:58.788625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.171 [2024-11-26 20:07:58.788679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.171 [2024-11-26 20:07:58.788692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.171 [2024-11-26 20:07:58.788699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.171 [2024-11-26 20:07:58.788705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.171 [2024-11-26 20:07:58.788723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.171 qpair failed and we were unable to recover it. 00:29:58.171 [2024-11-26 20:07:58.798618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.171 [2024-11-26 20:07:58.798665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.171 [2024-11-26 20:07:58.798678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.171 [2024-11-26 20:07:58.798685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.171 [2024-11-26 20:07:58.798691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.171 [2024-11-26 20:07:58.798705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.171 qpair failed and we were unable to recover it. 00:29:58.171 [2024-11-26 20:07:58.808716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.171 [2024-11-26 20:07:58.808771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.171 [2024-11-26 20:07:58.808784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.171 [2024-11-26 20:07:58.808790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.171 [2024-11-26 20:07:58.808797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.171 [2024-11-26 20:07:58.808811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.171 qpair failed and we were unable to recover it. 
00:29:58.171 [2024-11-26 20:07:58.818717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.171 [2024-11-26 20:07:58.818770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.171 [2024-11-26 20:07:58.818783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.171 [2024-11-26 20:07:58.818790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.171 [2024-11-26 20:07:58.818797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.171 [2024-11-26 20:07:58.818810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.171 qpair failed and we were unable to recover it. 00:29:58.171 [2024-11-26 20:07:58.828728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.171 [2024-11-26 20:07:58.828777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.171 [2024-11-26 20:07:58.828790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.171 [2024-11-26 20:07:58.828797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.171 [2024-11-26 20:07:58.828803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.171 [2024-11-26 20:07:58.828817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.171 qpair failed and we were unable to recover it. 00:29:58.171 [2024-11-26 20:07:58.838761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.171 [2024-11-26 20:07:58.838835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.171 [2024-11-26 20:07:58.838848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.171 [2024-11-26 20:07:58.838854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.171 [2024-11-26 20:07:58.838861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.171 [2024-11-26 20:07:58.838874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.171 qpair failed and we were unable to recover it. 
00:29:58.171 [2024-11-26 20:07:58.848819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.171 [2024-11-26 20:07:58.848877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.171 [2024-11-26 20:07:58.848901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.171 [2024-11-26 20:07:58.848910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.171 [2024-11-26 20:07:58.848917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.171 [2024-11-26 20:07:58.848936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.172 qpair failed and we were unable to recover it. 00:29:58.172 [2024-11-26 20:07:58.858701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.172 [2024-11-26 20:07:58.858770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.172 [2024-11-26 20:07:58.858784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.172 [2024-11-26 20:07:58.858792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.172 [2024-11-26 20:07:58.858798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.172 [2024-11-26 20:07:58.858813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.172 qpair failed and we were unable to recover it. 00:29:58.172 [2024-11-26 20:07:58.868847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.172 [2024-11-26 20:07:58.868894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.172 [2024-11-26 20:07:58.868908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.172 [2024-11-26 20:07:58.868915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.172 [2024-11-26 20:07:58.868921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.172 [2024-11-26 20:07:58.868935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.172 qpair failed and we were unable to recover it. 
00:29:58.172 [2024-11-26 20:07:58.878875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.172 [2024-11-26 20:07:58.878923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.172 [2024-11-26 20:07:58.878940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.172 [2024-11-26 20:07:58.878947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.172 [2024-11-26 20:07:58.878953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.172 [2024-11-26 20:07:58.878967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.172 qpair failed and we were unable to recover it. 00:29:58.172 [2024-11-26 20:07:58.888948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.172 [2024-11-26 20:07:58.889008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.172 [2024-11-26 20:07:58.889032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.172 [2024-11-26 20:07:58.889041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.172 [2024-11-26 20:07:58.889047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.172 [2024-11-26 20:07:58.889067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.172 qpair failed and we were unable to recover it. 00:29:58.172 [2024-11-26 20:07:58.898931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.172 [2024-11-26 20:07:58.898986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.172 [2024-11-26 20:07:58.899010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.172 [2024-11-26 20:07:58.899020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.172 [2024-11-26 20:07:58.899028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.172 [2024-11-26 20:07:58.899048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.172 qpair failed and we were unable to recover it. 
00:29:58.172 [2024-11-26 20:07:58.908880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.172 [2024-11-26 20:07:58.908930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.172 [2024-11-26 20:07:58.908945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.172 [2024-11-26 20:07:58.908953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.172 [2024-11-26 20:07:58.908959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.172 [2024-11-26 20:07:58.908975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.172 qpair failed and we were unable to recover it. 00:29:58.172 [2024-11-26 20:07:58.918985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.172 [2024-11-26 20:07:58.919047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.172 [2024-11-26 20:07:58.919061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.172 [2024-11-26 20:07:58.919068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.172 [2024-11-26 20:07:58.919079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.172 [2024-11-26 20:07:58.919094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.172 qpair failed and we were unable to recover it. 00:29:58.172 [2024-11-26 20:07:58.929030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.172 [2024-11-26 20:07:58.929110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.172 [2024-11-26 20:07:58.929124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.172 [2024-11-26 20:07:58.929131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.172 [2024-11-26 20:07:58.929137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.172 [2024-11-26 20:07:58.929151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.172 qpair failed and we were unable to recover it. 
00:29:58.172 [2024-11-26 20:07:58.939035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.172 [2024-11-26 20:07:58.939088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.172 [2024-11-26 20:07:58.939101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.172 [2024-11-26 20:07:58.939108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.172 [2024-11-26 20:07:58.939115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.172 [2024-11-26 20:07:58.939129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.172 qpair failed and we were unable to recover it. 00:29:58.172 [2024-11-26 20:07:58.949065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.172 [2024-11-26 20:07:58.949112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.172 [2024-11-26 20:07:58.949125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.172 [2024-11-26 20:07:58.949132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.172 [2024-11-26 20:07:58.949138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.172 [2024-11-26 20:07:58.949152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.172 qpair failed and we were unable to recover it. 00:29:58.172 [2024-11-26 20:07:58.958977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.172 [2024-11-26 20:07:58.959038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.172 [2024-11-26 20:07:58.959051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.172 [2024-11-26 20:07:58.959058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.172 [2024-11-26 20:07:58.959064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.172 [2024-11-26 20:07:58.959078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.172 qpair failed and we were unable to recover it. 
00:29:58.172 [2024-11-26 20:07:58.969167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.172 [2024-11-26 20:07:58.969223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.172 [2024-11-26 20:07:58.969236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.172 [2024-11-26 20:07:58.969243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.172 [2024-11-26 20:07:58.969249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.172 [2024-11-26 20:07:58.969263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.172 qpair failed and we were unable to recover it. 00:29:58.172 [2024-11-26 20:07:58.979179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.172 [2024-11-26 20:07:58.979229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.172 [2024-11-26 20:07:58.979242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.172 [2024-11-26 20:07:58.979249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.172 [2024-11-26 20:07:58.979255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.172 [2024-11-26 20:07:58.979269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.172 qpair failed and we were unable to recover it. 00:29:58.435 [2024-11-26 20:07:58.989169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.435 [2024-11-26 20:07:58.989266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.435 [2024-11-26 20:07:58.989280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.435 [2024-11-26 20:07:58.989287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.435 [2024-11-26 20:07:58.989293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.435 [2024-11-26 20:07:58.989308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.435 qpair failed and we were unable to recover it. 
00:29:58.435 [2024-11-26 20:07:58.999072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.435 [2024-11-26 20:07:58.999118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.435 [2024-11-26 20:07:58.999131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.435 [2024-11-26 20:07:58.999139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.435 [2024-11-26 20:07:58.999145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.435 [2024-11-26 20:07:58.999162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-11-26 20:07:59.009257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.435 [2024-11-26 20:07:59.009311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.435 [2024-11-26 20:07:59.009327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.435 [2024-11-26 20:07:59.009334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.435 [2024-11-26 20:07:59.009341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.435 [2024-11-26 20:07:59.009355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-11-26 20:07:59.019218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.435 [2024-11-26 20:07:59.019268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.435 [2024-11-26 20:07:59.019281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.435 [2024-11-26 20:07:59.019289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.435 [2024-11-26 20:07:59.019295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.435 [2024-11-26 20:07:59.019309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.435 qpair failed and we were unable to recover it. 
00:29:58.435 [2024-11-26 20:07:59.029249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.435 [2024-11-26 20:07:59.029296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.435 [2024-11-26 20:07:59.029309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.435 [2024-11-26 20:07:59.029316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.435 [2024-11-26 20:07:59.029322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.435 [2024-11-26 20:07:59.029336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-11-26 20:07:59.039293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.435 [2024-11-26 20:07:59.039354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.435 [2024-11-26 20:07:59.039367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.435 [2024-11-26 20:07:59.039374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.435 [2024-11-26 20:07:59.039380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.435 [2024-11-26 20:07:59.039394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-11-26 20:07:59.049319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.435 [2024-11-26 20:07:59.049371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.435 [2024-11-26 20:07:59.049384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.435 [2024-11-26 20:07:59.049391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.435 [2024-11-26 20:07:59.049401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.435 [2024-11-26 20:07:59.049415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.435 qpair failed and we were unable to recover it. 
00:29:58.435 [2024-11-26 20:07:59.059383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.435 [2024-11-26 20:07:59.059435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.435 [2024-11-26 20:07:59.059448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.435 [2024-11-26 20:07:59.059454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.435 [2024-11-26 20:07:59.059461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.435 [2024-11-26 20:07:59.059475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-11-26 20:07:59.069404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.435 [2024-11-26 20:07:59.069452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.435 [2024-11-26 20:07:59.069464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.435 [2024-11-26 20:07:59.069471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.435 [2024-11-26 20:07:59.069478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.435 [2024-11-26 20:07:59.069492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-11-26 20:07:59.079424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.435 [2024-11-26 20:07:59.079475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.435 [2024-11-26 20:07:59.079488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.435 [2024-11-26 20:07:59.079495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.435 [2024-11-26 20:07:59.079501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.435 [2024-11-26 20:07:59.079515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.435 qpair failed and we were unable to recover it. 
00:29:58.435 [2024-11-26 20:07:59.089468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.435 [2024-11-26 20:07:59.089547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.435 [2024-11-26 20:07:59.089560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.435 [2024-11-26 20:07:59.089567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.435 [2024-11-26 20:07:59.089573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.435 [2024-11-26 20:07:59.089587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.435 qpair failed and we were unable to recover it. 00:29:58.435 [2024-11-26 20:07:59.099457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.435 [2024-11-26 20:07:59.099512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.435 [2024-11-26 20:07:59.099525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.435 [2024-11-26 20:07:59.099532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.436 [2024-11-26 20:07:59.099539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.436 [2024-11-26 20:07:59.099553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-11-26 20:07:59.109491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.436 [2024-11-26 20:07:59.109538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.436 [2024-11-26 20:07:59.109551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.436 [2024-11-26 20:07:59.109557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.436 [2024-11-26 20:07:59.109564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.436 [2024-11-26 20:07:59.109577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.436 qpair failed and we were unable to recover it. 
00:29:58.436 [2024-11-26 20:07:59.119519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.436 [2024-11-26 20:07:59.119571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.436 [2024-11-26 20:07:59.119584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.436 [2024-11-26 20:07:59.119591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.436 [2024-11-26 20:07:59.119597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.436 [2024-11-26 20:07:59.119611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-11-26 20:07:59.129577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.436 [2024-11-26 20:07:59.129628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.436 [2024-11-26 20:07:59.129641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.436 [2024-11-26 20:07:59.129648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.436 [2024-11-26 20:07:59.129654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.436 [2024-11-26 20:07:59.129667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-11-26 20:07:59.139574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.436 [2024-11-26 20:07:59.139629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.436 [2024-11-26 20:07:59.139648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.436 [2024-11-26 20:07:59.139658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.436 [2024-11-26 20:07:59.139667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.436 [2024-11-26 20:07:59.139682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.436 qpair failed and we were unable to recover it. 
00:29:58.436 [2024-11-26 20:07:59.149603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.436 [2024-11-26 20:07:59.149653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.436 [2024-11-26 20:07:59.149668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.436 [2024-11-26 20:07:59.149675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.436 [2024-11-26 20:07:59.149682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.436 [2024-11-26 20:07:59.149700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-11-26 20:07:59.159635] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.436 [2024-11-26 20:07:59.159722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.436 [2024-11-26 20:07:59.159735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.436 [2024-11-26 20:07:59.159743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.436 [2024-11-26 20:07:59.159749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.436 [2024-11-26 20:07:59.159763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-11-26 20:07:59.169715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.436 [2024-11-26 20:07:59.169766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.436 [2024-11-26 20:07:59.169779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.436 [2024-11-26 20:07:59.169788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.436 [2024-11-26 20:07:59.169795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.436 [2024-11-26 20:07:59.169809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.436 qpair failed and we were unable to recover it. 
00:29:58.436 [2024-11-26 20:07:59.179682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.436 [2024-11-26 20:07:59.179739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.436 [2024-11-26 20:07:59.179752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.436 [2024-11-26 20:07:59.179762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.436 [2024-11-26 20:07:59.179769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.436 [2024-11-26 20:07:59.179783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-11-26 20:07:59.189722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.436 [2024-11-26 20:07:59.189811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.436 [2024-11-26 20:07:59.189824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.436 [2024-11-26 20:07:59.189831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.436 [2024-11-26 20:07:59.189838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.436 [2024-11-26 20:07:59.189851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-11-26 20:07:59.199646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.436 [2024-11-26 20:07:59.199706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.436 [2024-11-26 20:07:59.199719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.436 [2024-11-26 20:07:59.199726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.436 [2024-11-26 20:07:59.199732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.436 [2024-11-26 20:07:59.199746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.436 qpair failed and we were unable to recover it. 
00:29:58.436 [2024-11-26 20:07:59.209810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.436 [2024-11-26 20:07:59.209901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.436 [2024-11-26 20:07:59.209913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.436 [2024-11-26 20:07:59.209920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.436 [2024-11-26 20:07:59.209926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.436 [2024-11-26 20:07:59.209940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-11-26 20:07:59.219798] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.436 [2024-11-26 20:07:59.219847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.436 [2024-11-26 20:07:59.219860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.436 [2024-11-26 20:07:59.219868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.436 [2024-11-26 20:07:59.219874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.436 [2024-11-26 20:07:59.219892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.436 qpair failed and we were unable to recover it. 00:29:58.436 [2024-11-26 20:07:59.229811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.436 [2024-11-26 20:07:59.229884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.436 [2024-11-26 20:07:59.229897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.436 [2024-11-26 20:07:59.229904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.436 [2024-11-26 20:07:59.229910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.437 [2024-11-26 20:07:59.229924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.437 qpair failed and we were unable to recover it. 
00:29:58.437 [2024-11-26 20:07:59.239850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.437 [2024-11-26 20:07:59.239903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.437 [2024-11-26 20:07:59.239928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.437 [2024-11-26 20:07:59.239937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.437 [2024-11-26 20:07:59.239944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.437 [2024-11-26 20:07:59.239963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.437 qpair failed and we were unable to recover it. 00:29:58.437 [2024-11-26 20:07:59.249891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.437 [2024-11-26 20:07:59.249957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.437 [2024-11-26 20:07:59.249982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.437 [2024-11-26 20:07:59.249990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.437 [2024-11-26 20:07:59.249998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.437 [2024-11-26 20:07:59.250017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.437 qpair failed and we were unable to recover it. 00:29:58.699 [2024-11-26 20:07:59.259906] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.699 [2024-11-26 20:07:59.259956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.699 [2024-11-26 20:07:59.259972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.699 [2024-11-26 20:07:59.259979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.699 [2024-11-26 20:07:59.259986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.699 [2024-11-26 20:07:59.260001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.699 qpair failed and we were unable to recover it. 
00:29:58.699 [2024-11-26 20:07:59.269922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.699 [2024-11-26 20:07:59.269974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.699 [2024-11-26 20:07:59.269988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.699 [2024-11-26 20:07:59.269996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.699 [2024-11-26 20:07:59.270002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.699 [2024-11-26 20:07:59.270016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.699 qpair failed and we were unable to recover it. 00:29:58.699 [2024-11-26 20:07:59.279948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.699 [2024-11-26 20:07:59.279994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.699 [2024-11-26 20:07:59.280007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.699 [2024-11-26 20:07:59.280014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.699 [2024-11-26 20:07:59.280020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.699 [2024-11-26 20:07:59.280035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.699 qpair failed and we were unable to recover it. 00:29:58.699 [2024-11-26 20:07:59.290015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.699 [2024-11-26 20:07:59.290071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.699 [2024-11-26 20:07:59.290084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.699 [2024-11-26 20:07:59.290091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.699 [2024-11-26 20:07:59.290097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:58.699 [2024-11-26 20:07:59.290111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.699 qpair failed and we were unable to recover it. 
00:29:58.699 [2024-11-26 20:07:59.300009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.699 [2024-11-26 20:07:59.300056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.699 [2024-11-26 20:07:59.300069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.699 [2024-11-26 20:07:59.300076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.699 [2024-11-26 20:07:59.300082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.699 [2024-11-26 20:07:59.300097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.699 qpair failed and we were unable to recover it.
00:29:58.699 [2024-11-26 20:07:59.310016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.699 [2024-11-26 20:07:59.310078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.699 [2024-11-26 20:07:59.310091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.699 [2024-11-26 20:07:59.310102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.699 [2024-11-26 20:07:59.310109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.699 [2024-11-26 20:07:59.310123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.699 qpair failed and we were unable to recover it.
00:29:58.699 [2024-11-26 20:07:59.320054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.699 [2024-11-26 20:07:59.320105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.699 [2024-11-26 20:07:59.320118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.699 [2024-11-26 20:07:59.320125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.699 [2024-11-26 20:07:59.320131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.699 [2024-11-26 20:07:59.320145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.699 qpair failed and we were unable to recover it.
00:29:58.699 [2024-11-26 20:07:59.330124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.699 [2024-11-26 20:07:59.330181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.699 [2024-11-26 20:07:59.330194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.699 [2024-11-26 20:07:59.330201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.699 [2024-11-26 20:07:59.330208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.699 [2024-11-26 20:07:59.330222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.700 qpair failed and we were unable to recover it.
00:29:58.700 [2024-11-26 20:07:59.340123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.700 [2024-11-26 20:07:59.340177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.700 [2024-11-26 20:07:59.340190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.700 [2024-11-26 20:07:59.340198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.700 [2024-11-26 20:07:59.340204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.700 [2024-11-26 20:07:59.340218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.700 qpair failed and we were unable to recover it.
00:29:58.700 [2024-11-26 20:07:59.350132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.700 [2024-11-26 20:07:59.350227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.700 [2024-11-26 20:07:59.350241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.700 [2024-11-26 20:07:59.350248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.700 [2024-11-26 20:07:59.350254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.700 [2024-11-26 20:07:59.350276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.700 qpair failed and we were unable to recover it.
00:29:58.700 [2024-11-26 20:07:59.360177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.700 [2024-11-26 20:07:59.360227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.700 [2024-11-26 20:07:59.360241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.700 [2024-11-26 20:07:59.360248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.700 [2024-11-26 20:07:59.360254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.700 [2024-11-26 20:07:59.360268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.700 qpair failed and we were unable to recover it.
00:29:58.700 [2024-11-26 20:07:59.370216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.700 [2024-11-26 20:07:59.370280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.700 [2024-11-26 20:07:59.370293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.700 [2024-11-26 20:07:59.370300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.700 [2024-11-26 20:07:59.370306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.700 [2024-11-26 20:07:59.370321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.700 qpair failed and we were unable to recover it.
00:29:58.700 [2024-11-26 20:07:59.380301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.700 [2024-11-26 20:07:59.380351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.700 [2024-11-26 20:07:59.380364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.700 [2024-11-26 20:07:59.380371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.700 [2024-11-26 20:07:59.380378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.700 [2024-11-26 20:07:59.380392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.700 qpair failed and we were unable to recover it.
00:29:58.700 [2024-11-26 20:07:59.390295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.700 [2024-11-26 20:07:59.390344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.700 [2024-11-26 20:07:59.390357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.700 [2024-11-26 20:07:59.390364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.700 [2024-11-26 20:07:59.390371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.700 [2024-11-26 20:07:59.390385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.700 qpair failed and we were unable to recover it.
00:29:58.700 [2024-11-26 20:07:59.400300] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.700 [2024-11-26 20:07:59.400347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.700 [2024-11-26 20:07:59.400361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.700 [2024-11-26 20:07:59.400369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.700 [2024-11-26 20:07:59.400376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.700 [2024-11-26 20:07:59.400391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.700 qpair failed and we were unable to recover it.
00:29:58.700 [2024-11-26 20:07:59.410359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.700 [2024-11-26 20:07:59.410445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.700 [2024-11-26 20:07:59.410458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.700 [2024-11-26 20:07:59.410465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.700 [2024-11-26 20:07:59.410471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.700 [2024-11-26 20:07:59.410485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.700 qpair failed and we were unable to recover it.
00:29:58.700 [2024-11-26 20:07:59.420353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.700 [2024-11-26 20:07:59.420413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.700 [2024-11-26 20:07:59.420426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.700 [2024-11-26 20:07:59.420433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.700 [2024-11-26 20:07:59.420440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.700 [2024-11-26 20:07:59.420454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.700 qpair failed and we were unable to recover it.
00:29:58.700 [2024-11-26 20:07:59.430379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.700 [2024-11-26 20:07:59.430450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.700 [2024-11-26 20:07:59.430463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.700 [2024-11-26 20:07:59.430470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.700 [2024-11-26 20:07:59.430476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.700 [2024-11-26 20:07:59.430490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.700 qpair failed and we were unable to recover it.
00:29:58.700 [2024-11-26 20:07:59.440276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.700 [2024-11-26 20:07:59.440321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.700 [2024-11-26 20:07:59.440338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.700 [2024-11-26 20:07:59.440345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.700 [2024-11-26 20:07:59.440351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.700 [2024-11-26 20:07:59.440366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.700 qpair failed and we were unable to recover it.
00:29:58.701 [2024-11-26 20:07:59.450344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.701 [2024-11-26 20:07:59.450398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.701 [2024-11-26 20:07:59.450413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.701 [2024-11-26 20:07:59.450420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.701 [2024-11-26 20:07:59.450426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.701 [2024-11-26 20:07:59.450440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.701 qpair failed and we were unable to recover it.
00:29:58.701 [2024-11-26 20:07:59.460464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.701 [2024-11-26 20:07:59.460515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.701 [2024-11-26 20:07:59.460529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.701 [2024-11-26 20:07:59.460536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.701 [2024-11-26 20:07:59.460543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.701 [2024-11-26 20:07:59.460557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.701 qpair failed and we were unable to recover it.
00:29:58.701 [2024-11-26 20:07:59.470478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.701 [2024-11-26 20:07:59.470527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.701 [2024-11-26 20:07:59.470541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.701 [2024-11-26 20:07:59.470548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.701 [2024-11-26 20:07:59.470554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.701 [2024-11-26 20:07:59.470568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.701 qpair failed and we were unable to recover it.
00:29:58.701 [2024-11-26 20:07:59.480492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.701 [2024-11-26 20:07:59.480539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.701 [2024-11-26 20:07:59.480552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.701 [2024-11-26 20:07:59.480559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.701 [2024-11-26 20:07:59.480569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.701 [2024-11-26 20:07:59.480583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.701 qpair failed and we were unable to recover it.
00:29:58.701 [2024-11-26 20:07:59.490568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.701 [2024-11-26 20:07:59.490647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.701 [2024-11-26 20:07:59.490660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.701 [2024-11-26 20:07:59.490667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.701 [2024-11-26 20:07:59.490674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.701 [2024-11-26 20:07:59.490688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.701 qpair failed and we were unable to recover it.
00:29:58.701 [2024-11-26 20:07:59.500556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.701 [2024-11-26 20:07:59.500610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.701 [2024-11-26 20:07:59.500623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.701 [2024-11-26 20:07:59.500630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.701 [2024-11-26 20:07:59.500636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.701 [2024-11-26 20:07:59.500650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.701 qpair failed and we were unable to recover it.
00:29:58.701 [2024-11-26 20:07:59.510572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.701 [2024-11-26 20:07:59.510641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.701 [2024-11-26 20:07:59.510654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.701 [2024-11-26 20:07:59.510661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.701 [2024-11-26 20:07:59.510668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.701 [2024-11-26 20:07:59.510682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.701 qpair failed and we were unable to recover it.
00:29:58.963 [2024-11-26 20:07:59.520473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.963 [2024-11-26 20:07:59.520524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.963 [2024-11-26 20:07:59.520537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.963 [2024-11-26 20:07:59.520544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.963 [2024-11-26 20:07:59.520550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.963 [2024-11-26 20:07:59.520564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.963 qpair failed and we were unable to recover it.
00:29:58.963 [2024-11-26 20:07:59.530673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.963 [2024-11-26 20:07:59.530729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.963 [2024-11-26 20:07:59.530743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.963 [2024-11-26 20:07:59.530750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.963 [2024-11-26 20:07:59.530757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.963 [2024-11-26 20:07:59.530774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.963 qpair failed and we were unable to recover it.
00:29:58.963 [2024-11-26 20:07:59.540662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.963 [2024-11-26 20:07:59.540710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.963 [2024-11-26 20:07:59.540725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.963 [2024-11-26 20:07:59.540732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.963 [2024-11-26 20:07:59.540738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.963 [2024-11-26 20:07:59.540753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.963 qpair failed and we were unable to recover it.
00:29:58.963 [2024-11-26 20:07:59.550664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.963 [2024-11-26 20:07:59.550716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.963 [2024-11-26 20:07:59.550729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.963 [2024-11-26 20:07:59.550736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.963 [2024-11-26 20:07:59.550742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.963 [2024-11-26 20:07:59.550756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.963 qpair failed and we were unable to recover it.
00:29:58.963 [2024-11-26 20:07:59.560731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.963 [2024-11-26 20:07:59.560776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.963 [2024-11-26 20:07:59.560789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.963 [2024-11-26 20:07:59.560796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.963 [2024-11-26 20:07:59.560802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.963 [2024-11-26 20:07:59.560816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.964 qpair failed and we were unable to recover it.
00:29:58.964 [2024-11-26 20:07:59.570683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.964 [2024-11-26 20:07:59.570737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.964 [2024-11-26 20:07:59.570753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.964 [2024-11-26 20:07:59.570760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.964 [2024-11-26 20:07:59.570766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.964 [2024-11-26 20:07:59.570780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.964 qpair failed and we were unable to recover it.
00:29:58.964 [2024-11-26 20:07:59.580778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.964 [2024-11-26 20:07:59.580829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.964 [2024-11-26 20:07:59.580841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.964 [2024-11-26 20:07:59.580848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.964 [2024-11-26 20:07:59.580854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.964 [2024-11-26 20:07:59.580868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.964 qpair failed and we were unable to recover it.
00:29:58.964 [2024-11-26 20:07:59.590754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.964 [2024-11-26 20:07:59.590819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.964 [2024-11-26 20:07:59.590832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.964 [2024-11-26 20:07:59.590839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.964 [2024-11-26 20:07:59.590845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.964 [2024-11-26 20:07:59.590859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.964 qpair failed and we were unable to recover it.
00:29:58.964 [2024-11-26 20:07:59.600819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.964 [2024-11-26 20:07:59.600872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.964 [2024-11-26 20:07:59.600885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.964 [2024-11-26 20:07:59.600892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.964 [2024-11-26 20:07:59.600898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.964 [2024-11-26 20:07:59.600912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.964 qpair failed and we were unable to recover it.
00:29:58.964 [2024-11-26 20:07:59.610855] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.964 [2024-11-26 20:07:59.610910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.964 [2024-11-26 20:07:59.610923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.964 [2024-11-26 20:07:59.610930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.964 [2024-11-26 20:07:59.610939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.964 [2024-11-26 20:07:59.610953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.964 qpair failed and we were unable to recover it.
00:29:58.964 [2024-11-26 20:07:59.620890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.964 [2024-11-26 20:07:59.620940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.964 [2024-11-26 20:07:59.620953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.964 [2024-11-26 20:07:59.620960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.964 [2024-11-26 20:07:59.620967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.964 [2024-11-26 20:07:59.620980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.964 qpair failed and we were unable to recover it.
00:29:58.964 [2024-11-26 20:07:59.630877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.964 [2024-11-26 20:07:59.630930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.964 [2024-11-26 20:07:59.630955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.964 [2024-11-26 20:07:59.630964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.964 [2024-11-26 20:07:59.630971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.964 [2024-11-26 20:07:59.630990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.964 qpair failed and we were unable to recover it.
00:29:58.964 [2024-11-26 20:07:59.640930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.964 [2024-11-26 20:07:59.640985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.964 [2024-11-26 20:07:59.641009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.964 [2024-11-26 20:07:59.641018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.964 [2024-11-26 20:07:59.641025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.964 [2024-11-26 20:07:59.641045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.964 qpair failed and we were unable to recover it.
00:29:58.964 [2024-11-26 20:07:59.651009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.964 [2024-11-26 20:07:59.651066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.964 [2024-11-26 20:07:59.651081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.964 [2024-11-26 20:07:59.651090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.964 [2024-11-26 20:07:59.651096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.964 [2024-11-26 20:07:59.651111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.964 qpair failed and we were unable to recover it.
00:29:58.964 [2024-11-26 20:07:59.660991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.964 [2024-11-26 20:07:59.661043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.964 [2024-11-26 20:07:59.661057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.964 [2024-11-26 20:07:59.661064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.964 [2024-11-26 20:07:59.661071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.965 [2024-11-26 20:07:59.661085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.965 qpair failed and we were unable to recover it.
00:29:58.965 [2024-11-26 20:07:59.671011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.965 [2024-11-26 20:07:59.671059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.965 [2024-11-26 20:07:59.671073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.965 [2024-11-26 20:07:59.671080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.965 [2024-11-26 20:07:59.671086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.965 [2024-11-26 20:07:59.671104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.965 qpair failed and we were unable to recover it.
00:29:58.965 [2024-11-26 20:07:59.681046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.965 [2024-11-26 20:07:59.681094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.965 [2024-11-26 20:07:59.681108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.965 [2024-11-26 20:07:59.681115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.965 [2024-11-26 20:07:59.681122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.965 [2024-11-26 20:07:59.681136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.965 qpair failed and we were unable to recover it.
00:29:58.965 [2024-11-26 20:07:59.691120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.965 [2024-11-26 20:07:59.691176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.965 [2024-11-26 20:07:59.691189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.965 [2024-11-26 20:07:59.691196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.965 [2024-11-26 20:07:59.691203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.965 [2024-11-26 20:07:59.691217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.965 qpair failed and we were unable to recover it.
00:29:58.965 [2024-11-26 20:07:59.701105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.965 [2024-11-26 20:07:59.701183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.965 [2024-11-26 20:07:59.701201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.965 [2024-11-26 20:07:59.701208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.965 [2024-11-26 20:07:59.701214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.965 [2024-11-26 20:07:59.701228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.965 qpair failed and we were unable to recover it.
00:29:58.965 [2024-11-26 20:07:59.711154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.965 [2024-11-26 20:07:59.711245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.965 [2024-11-26 20:07:59.711259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.965 [2024-11-26 20:07:59.711266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.965 [2024-11-26 20:07:59.711273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.965 [2024-11-26 20:07:59.711287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.965 qpair failed and we were unable to recover it.
00:29:58.965 [2024-11-26 20:07:59.721147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.965 [2024-11-26 20:07:59.721201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.965 [2024-11-26 20:07:59.721214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.965 [2024-11-26 20:07:59.721221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.965 [2024-11-26 20:07:59.721227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.965 [2024-11-26 20:07:59.721242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.965 qpair failed and we were unable to recover it.
00:29:58.965 [2024-11-26 20:07:59.731227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.965 [2024-11-26 20:07:59.731282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.965 [2024-11-26 20:07:59.731295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.965 [2024-11-26 20:07:59.731302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.965 [2024-11-26 20:07:59.731308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.965 [2024-11-26 20:07:59.731323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.965 qpair failed and we were unable to recover it.
00:29:58.965 [2024-11-26 20:07:59.741213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.965 [2024-11-26 20:07:59.741302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.965 [2024-11-26 20:07:59.741315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.965 [2024-11-26 20:07:59.741325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.965 [2024-11-26 20:07:59.741332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.965 [2024-11-26 20:07:59.741346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.965 qpair failed and we were unable to recover it.
00:29:58.965 [2024-11-26 20:07:59.751232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.965 [2024-11-26 20:07:59.751279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.965 [2024-11-26 20:07:59.751292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.965 [2024-11-26 20:07:59.751299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.965 [2024-11-26 20:07:59.751305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.965 [2024-11-26 20:07:59.751319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.965 qpair failed and we were unable to recover it.
00:29:58.965 [2024-11-26 20:07:59.761262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.965 [2024-11-26 20:07:59.761308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.965 [2024-11-26 20:07:59.761321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.965 [2024-11-26 20:07:59.761328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.965 [2024-11-26 20:07:59.761334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.965 [2024-11-26 20:07:59.761348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.965 qpair failed and we were unable to recover it.
00:29:58.965 [2024-11-26 20:07:59.771324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.966 [2024-11-26 20:07:59.771399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.966 [2024-11-26 20:07:59.771412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.966 [2024-11-26 20:07:59.771419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.966 [2024-11-26 20:07:59.771425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:58.966 [2024-11-26 20:07:59.771439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.966 qpair failed and we were unable to recover it.
00:29:59.229 [2024-11-26 20:07:59.781320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.229 [2024-11-26 20:07:59.781366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.229 [2024-11-26 20:07:59.781379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.229 [2024-11-26 20:07:59.781386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.229 [2024-11-26 20:07:59.781393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:59.229 [2024-11-26 20:07:59.781410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.229 qpair failed and we were unable to recover it.
00:29:59.229 [2024-11-26 20:07:59.791337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.229 [2024-11-26 20:07:59.791381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.229 [2024-11-26 20:07:59.791394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.229 [2024-11-26 20:07:59.791401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.229 [2024-11-26 20:07:59.791407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:59.229 [2024-11-26 20:07:59.791421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.229 qpair failed and we were unable to recover it.
00:29:59.229 [2024-11-26 20:07:59.801370] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.229 [2024-11-26 20:07:59.801415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.229 [2024-11-26 20:07:59.801428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.229 [2024-11-26 20:07:59.801435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.229 [2024-11-26 20:07:59.801442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:59.229 [2024-11-26 20:07:59.801456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.229 qpair failed and we were unable to recover it.
00:29:59.229 [2024-11-26 20:07:59.811333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.229 [2024-11-26 20:07:59.811387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.229 [2024-11-26 20:07:59.811400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.229 [2024-11-26 20:07:59.811407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.229 [2024-11-26 20:07:59.811413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:59.229 [2024-11-26 20:07:59.811427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.229 qpair failed and we were unable to recover it.
00:29:59.229 [2024-11-26 20:07:59.821397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.229 [2024-11-26 20:07:59.821452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.229 [2024-11-26 20:07:59.821464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.229 [2024-11-26 20:07:59.821472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.229 [2024-11-26 20:07:59.821478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:59.229 [2024-11-26 20:07:59.821492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.229 qpair failed and we were unable to recover it.
00:29:59.229 [2024-11-26 20:07:59.831443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.229 [2024-11-26 20:07:59.831496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.229 [2024-11-26 20:07:59.831509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.229 [2024-11-26 20:07:59.831516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.229 [2024-11-26 20:07:59.831522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:59.229 [2024-11-26 20:07:59.831536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.229 qpair failed and we were unable to recover it.
00:29:59.229 [2024-11-26 20:07:59.841448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.229 [2024-11-26 20:07:59.841496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.229 [2024-11-26 20:07:59.841509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.229 [2024-11-26 20:07:59.841516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.229 [2024-11-26 20:07:59.841522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:59.229 [2024-11-26 20:07:59.841536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.229 qpair failed and we were unable to recover it.
00:29:59.229 [2024-11-26 20:07:59.851434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.229 [2024-11-26 20:07:59.851488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.229 [2024-11-26 20:07:59.851501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.229 [2024-11-26 20:07:59.851508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.229 [2024-11-26 20:07:59.851514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:59.229 [2024-11-26 20:07:59.851528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.229 qpair failed and we were unable to recover it.
00:29:59.229 [2024-11-26 20:07:59.861539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.229 [2024-11-26 20:07:59.861586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.229 [2024-11-26 20:07:59.861600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.229 [2024-11-26 20:07:59.861606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.229 [2024-11-26 20:07:59.861613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:29:59.229 [2024-11-26 20:07:59.861626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.229 qpair failed and we were unable to recover it.
00:29:59.229 [2024-11-26 20:07:59.871548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.229 [2024-11-26 20:07:59.871594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.229 [2024-11-26 20:07:59.871607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.229 [2024-11-26 20:07:59.871617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.229 [2024-11-26 20:07:59.871624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.229 [2024-11-26 20:07:59.871638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.229 qpair failed and we were unable to recover it. 00:29:59.229 [2024-11-26 20:07:59.881570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.229 [2024-11-26 20:07:59.881666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.229 [2024-11-26 20:07:59.881679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.229 [2024-11-26 20:07:59.881686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.229 [2024-11-26 20:07:59.881693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.229 [2024-11-26 20:07:59.881707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.229 qpair failed and we were unable to recover it. 00:29:59.229 [2024-11-26 20:07:59.891638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.229 [2024-11-26 20:07:59.891724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.230 [2024-11-26 20:07:59.891737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.230 [2024-11-26 20:07:59.891744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.230 [2024-11-26 20:07:59.891750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.230 [2024-11-26 20:07:59.891764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.230 qpair failed and we were unable to recover it. 
00:29:59.230 [2024-11-26 20:07:59.901649] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.230 [2024-11-26 20:07:59.901696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.230 [2024-11-26 20:07:59.901709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.230 [2024-11-26 20:07:59.901716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.230 [2024-11-26 20:07:59.901723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.230 [2024-11-26 20:07:59.901736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.230 qpair failed and we were unable to recover it. 00:29:59.230 [2024-11-26 20:07:59.911661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.230 [2024-11-26 20:07:59.911740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.230 [2024-11-26 20:07:59.911753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.230 [2024-11-26 20:07:59.911759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.230 [2024-11-26 20:07:59.911766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.230 [2024-11-26 20:07:59.911784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.230 qpair failed and we were unable to recover it. 00:29:59.230 [2024-11-26 20:07:59.921703] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.230 [2024-11-26 20:07:59.921798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.230 [2024-11-26 20:07:59.921812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.230 [2024-11-26 20:07:59.921818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.230 [2024-11-26 20:07:59.921825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.230 [2024-11-26 20:07:59.921839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.230 qpair failed and we were unable to recover it. 
00:29:59.230 [2024-11-26 20:07:59.931634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.230 [2024-11-26 20:07:59.931686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.230 [2024-11-26 20:07:59.931699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.230 [2024-11-26 20:07:59.931706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.230 [2024-11-26 20:07:59.931712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.230 [2024-11-26 20:07:59.931726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.230 qpair failed and we were unable to recover it. 00:29:59.230 [2024-11-26 20:07:59.941721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.230 [2024-11-26 20:07:59.941773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.230 [2024-11-26 20:07:59.941786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.230 [2024-11-26 20:07:59.941792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.230 [2024-11-26 20:07:59.941799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.230 [2024-11-26 20:07:59.941813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.230 qpair failed and we were unable to recover it. 00:29:59.230 [2024-11-26 20:07:59.951639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.230 [2024-11-26 20:07:59.951686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.230 [2024-11-26 20:07:59.951700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.230 [2024-11-26 20:07:59.951707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.230 [2024-11-26 20:07:59.951713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.230 [2024-11-26 20:07:59.951728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.230 qpair failed and we were unable to recover it. 
00:29:59.230 [2024-11-26 20:07:59.961783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.230 [2024-11-26 20:07:59.961867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.230 [2024-11-26 20:07:59.961880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.230 [2024-11-26 20:07:59.961887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.230 [2024-11-26 20:07:59.961894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.230 [2024-11-26 20:07:59.961908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.230 qpair failed and we were unable to recover it. 00:29:59.230 [2024-11-26 20:07:59.971882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.230 [2024-11-26 20:07:59.971958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.230 [2024-11-26 20:07:59.971972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.230 [2024-11-26 20:07:59.971978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.230 [2024-11-26 20:07:59.971985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.230 [2024-11-26 20:07:59.971999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.230 qpair failed and we were unable to recover it. 00:29:59.230 [2024-11-26 20:07:59.981823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.230 [2024-11-26 20:07:59.981872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.230 [2024-11-26 20:07:59.981885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.230 [2024-11-26 20:07:59.981891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.230 [2024-11-26 20:07:59.981898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.230 [2024-11-26 20:07:59.981911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.230 qpair failed and we were unable to recover it. 
00:29:59.230 [2024-11-26 20:07:59.991736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.230 [2024-11-26 20:07:59.991785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.230 [2024-11-26 20:07:59.991798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.230 [2024-11-26 20:07:59.991805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.230 [2024-11-26 20:07:59.991811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.230 [2024-11-26 20:07:59.991825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.230 qpair failed and we were unable to recover it. 00:29:59.230 [2024-11-26 20:08:00.002457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.230 [2024-11-26 20:08:00.002553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.230 [2024-11-26 20:08:00.002571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.230 [2024-11-26 20:08:00.002579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.230 [2024-11-26 20:08:00.002585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.230 [2024-11-26 20:08:00.002599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.230 qpair failed and we were unable to recover it. 00:29:59.230 [2024-11-26 20:08:00.011995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.230 [2024-11-26 20:08:00.012069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.230 [2024-11-26 20:08:00.012083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.230 [2024-11-26 20:08:00.012090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.230 [2024-11-26 20:08:00.012096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.230 [2024-11-26 20:08:00.012110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.230 qpair failed and we were unable to recover it. 
00:29:59.230 [2024-11-26 20:08:00.021894] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.230 [2024-11-26 20:08:00.021946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.231 [2024-11-26 20:08:00.021960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.231 [2024-11-26 20:08:00.021968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.231 [2024-11-26 20:08:00.021975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.231 [2024-11-26 20:08:00.021989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.231 qpair failed and we were unable to recover it. 00:29:59.231 [2024-11-26 20:08:00.031988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.231 [2024-11-26 20:08:00.032039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.231 [2024-11-26 20:08:00.032053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.231 [2024-11-26 20:08:00.032061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.231 [2024-11-26 20:08:00.032068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.231 [2024-11-26 20:08:00.032082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.231 qpair failed and we were unable to recover it. 00:29:59.231 [2024-11-26 20:08:00.041995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.231 [2024-11-26 20:08:00.042046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.231 [2024-11-26 20:08:00.042060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.231 [2024-11-26 20:08:00.042067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.231 [2024-11-26 20:08:00.042080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.231 [2024-11-26 20:08:00.042095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.231 qpair failed and we were unable to recover it. 
00:29:59.491 [2024-11-26 20:08:00.052073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.492 [2024-11-26 20:08:00.052134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.492 [2024-11-26 20:08:00.052147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.492 [2024-11-26 20:08:00.052154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.492 [2024-11-26 20:08:00.052165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.492 [2024-11-26 20:08:00.052180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.492 qpair failed and we were unable to recover it. 00:29:59.492 [2024-11-26 20:08:00.062058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.492 [2024-11-26 20:08:00.062111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.492 [2024-11-26 20:08:00.062124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.492 [2024-11-26 20:08:00.062131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.492 [2024-11-26 20:08:00.062137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.492 [2024-11-26 20:08:00.062151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.492 qpair failed and we were unable to recover it. 00:29:59.492 [2024-11-26 20:08:00.072085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.492 [2024-11-26 20:08:00.072136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.492 [2024-11-26 20:08:00.072149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.492 [2024-11-26 20:08:00.072156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.492 [2024-11-26 20:08:00.072167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.492 [2024-11-26 20:08:00.072181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.492 qpair failed and we were unable to recover it. 
00:29:59.492 [2024-11-26 20:08:00.082112] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.492 [2024-11-26 20:08:00.082163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.492 [2024-11-26 20:08:00.082176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.492 [2024-11-26 20:08:00.082183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.492 [2024-11-26 20:08:00.082190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.492 [2024-11-26 20:08:00.082204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.492 qpair failed and we were unable to recover it. 00:29:59.492 [2024-11-26 20:08:00.092192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.492 [2024-11-26 20:08:00.092252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.492 [2024-11-26 20:08:00.092265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.492 [2024-11-26 20:08:00.092272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.492 [2024-11-26 20:08:00.092279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.492 [2024-11-26 20:08:00.092293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.492 qpair failed and we were unable to recover it. 00:29:59.492 [2024-11-26 20:08:00.102185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.492 [2024-11-26 20:08:00.102235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.492 [2024-11-26 20:08:00.102248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.492 [2024-11-26 20:08:00.102256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.492 [2024-11-26 20:08:00.102262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.492 [2024-11-26 20:08:00.102276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.492 qpair failed and we were unable to recover it. 
00:29:59.492 [2024-11-26 20:08:00.112216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.492 [2024-11-26 20:08:00.112271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.492 [2024-11-26 20:08:00.112284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.492 [2024-11-26 20:08:00.112291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.492 [2024-11-26 20:08:00.112298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.492 [2024-11-26 20:08:00.112312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.492 qpair failed and we were unable to recover it. 00:29:59.492 [2024-11-26 20:08:00.122232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.492 [2024-11-26 20:08:00.122330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.492 [2024-11-26 20:08:00.122343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.492 [2024-11-26 20:08:00.122350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.492 [2024-11-26 20:08:00.122357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.492 [2024-11-26 20:08:00.122371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.492 qpair failed and we were unable to recover it. 00:29:59.492 [2024-11-26 20:08:00.132184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.492 [2024-11-26 20:08:00.132241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.492 [2024-11-26 20:08:00.132257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.492 [2024-11-26 20:08:00.132264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.492 [2024-11-26 20:08:00.132270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.492 [2024-11-26 20:08:00.132284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.492 qpair failed and we were unable to recover it. 
00:29:59.492 [2024-11-26 20:08:00.142282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.492 [2024-11-26 20:08:00.142336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.492 [2024-11-26 20:08:00.142349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.492 [2024-11-26 20:08:00.142356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.492 [2024-11-26 20:08:00.142362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.492 [2024-11-26 20:08:00.142376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.492 qpair failed and we were unable to recover it. 00:29:59.492 [2024-11-26 20:08:00.152187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.492 [2024-11-26 20:08:00.152251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.492 [2024-11-26 20:08:00.152265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.492 [2024-11-26 20:08:00.152273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.492 [2024-11-26 20:08:00.152279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.492 [2024-11-26 20:08:00.152294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.492 qpair failed and we were unable to recover it. 00:29:59.492 [2024-11-26 20:08:00.162335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.492 [2024-11-26 20:08:00.162386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.492 [2024-11-26 20:08:00.162399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.492 [2024-11-26 20:08:00.162406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.492 [2024-11-26 20:08:00.162413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.493 [2024-11-26 20:08:00.162427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.493 qpair failed and we were unable to recover it. 
00:29:59.493 [2024-11-26 20:08:00.172409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.493 [2024-11-26 20:08:00.172466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.493 [2024-11-26 20:08:00.172478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.493 [2024-11-26 20:08:00.172485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.493 [2024-11-26 20:08:00.172495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.493 [2024-11-26 20:08:00.172509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.493 qpair failed and we were unable to recover it. 00:29:59.493 [2024-11-26 20:08:00.182396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.493 [2024-11-26 20:08:00.182445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.493 [2024-11-26 20:08:00.182458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.493 [2024-11-26 20:08:00.182466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.493 [2024-11-26 20:08:00.182472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.493 [2024-11-26 20:08:00.182487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.493 qpair failed and we were unable to recover it. 00:29:59.493 [2024-11-26 20:08:00.192419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.493 [2024-11-26 20:08:00.192461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.493 [2024-11-26 20:08:00.192475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.493 [2024-11-26 20:08:00.192482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.493 [2024-11-26 20:08:00.192488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.493 [2024-11-26 20:08:00.192503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.493 qpair failed and we were unable to recover it. 
00:29:59.493 [2024-11-26 20:08:00.202459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.493 [2024-11-26 20:08:00.202509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.493 [2024-11-26 20:08:00.202522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.493 [2024-11-26 20:08:00.202529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.493 [2024-11-26 20:08:00.202536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.493 [2024-11-26 20:08:00.202550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.493 qpair failed and we were unable to recover it. 00:29:59.493 [2024-11-26 20:08:00.212543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.493 [2024-11-26 20:08:00.212601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.493 [2024-11-26 20:08:00.212614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.493 [2024-11-26 20:08:00.212621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.493 [2024-11-26 20:08:00.212627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.493 [2024-11-26 20:08:00.212641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.493 qpair failed and we were unable to recover it. 00:29:59.493 [2024-11-26 20:08:00.222514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.493 [2024-11-26 20:08:00.222564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.493 [2024-11-26 20:08:00.222577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.493 [2024-11-26 20:08:00.222584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.493 [2024-11-26 20:08:00.222590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.493 [2024-11-26 20:08:00.222604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.493 qpair failed and we were unable to recover it. 
00:29:59.493 [2024-11-26 20:08:00.232518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.493 [2024-11-26 20:08:00.232568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.493 [2024-11-26 20:08:00.232581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.493 [2024-11-26 20:08:00.232588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.493 [2024-11-26 20:08:00.232594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.493 [2024-11-26 20:08:00.232608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.493 qpair failed and we were unable to recover it. 00:29:59.493 [2024-11-26 20:08:00.242528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.493 [2024-11-26 20:08:00.242575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.493 [2024-11-26 20:08:00.242588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.493 [2024-11-26 20:08:00.242595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.493 [2024-11-26 20:08:00.242601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.493 [2024-11-26 20:08:00.242615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.493 qpair failed and we were unable to recover it. 00:29:59.493 [2024-11-26 20:08:00.252632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.493 [2024-11-26 20:08:00.252685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.493 [2024-11-26 20:08:00.252698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.493 [2024-11-26 20:08:00.252705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.493 [2024-11-26 20:08:00.252711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.493 [2024-11-26 20:08:00.252725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.493 qpair failed and we were unable to recover it. 
00:29:59.493 [2024-11-26 20:08:00.262609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.493 [2024-11-26 20:08:00.262662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.493 [2024-11-26 20:08:00.262678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.493 [2024-11-26 20:08:00.262685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.493 [2024-11-26 20:08:00.262691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.493 [2024-11-26 20:08:00.262705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.493 qpair failed and we were unable to recover it. 00:29:59.493 [2024-11-26 20:08:00.272588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.493 [2024-11-26 20:08:00.272630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.493 [2024-11-26 20:08:00.272643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.493 [2024-11-26 20:08:00.272650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.493 [2024-11-26 20:08:00.272657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.493 [2024-11-26 20:08:00.272671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.493 qpair failed and we were unable to recover it. 00:29:59.493 [2024-11-26 20:08:00.282664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.493 [2024-11-26 20:08:00.282712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.493 [2024-11-26 20:08:00.282725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.493 [2024-11-26 20:08:00.282732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.493 [2024-11-26 20:08:00.282738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.493 [2024-11-26 20:08:00.282752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.493 qpair failed and we were unable to recover it. 
00:29:59.493 [2024-11-26 20:08:00.292711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.493 [2024-11-26 20:08:00.292763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.493 [2024-11-26 20:08:00.292776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.493 [2024-11-26 20:08:00.292783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.493 [2024-11-26 20:08:00.292790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.494 [2024-11-26 20:08:00.292804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.494 qpair failed and we were unable to recover it. 00:29:59.494 [2024-11-26 20:08:00.302701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.494 [2024-11-26 20:08:00.302753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.494 [2024-11-26 20:08:00.302767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.494 [2024-11-26 20:08:00.302777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.494 [2024-11-26 20:08:00.302784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.494 [2024-11-26 20:08:00.302798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.494 qpair failed and we were unable to recover it. 00:29:59.756 [2024-11-26 20:08:00.312740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.756 [2024-11-26 20:08:00.312833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.756 [2024-11-26 20:08:00.312846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.756 [2024-11-26 20:08:00.312853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.756 [2024-11-26 20:08:00.312859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.756 [2024-11-26 20:08:00.312873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.756 qpair failed and we were unable to recover it. 
00:29:59.756 [2024-11-26 20:08:00.322766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.756 [2024-11-26 20:08:00.322813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.756 [2024-11-26 20:08:00.322826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.756 [2024-11-26 20:08:00.322833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.756 [2024-11-26 20:08:00.322840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.756 [2024-11-26 20:08:00.322853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.756 qpair failed and we were unable to recover it. 00:29:59.756 [2024-11-26 20:08:00.332848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.756 [2024-11-26 20:08:00.332905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.756 [2024-11-26 20:08:00.332918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.756 [2024-11-26 20:08:00.332925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.756 [2024-11-26 20:08:00.332931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.756 [2024-11-26 20:08:00.332945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.756 qpair failed and we were unable to recover it. 00:29:59.756 [2024-11-26 20:08:00.342846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.756 [2024-11-26 20:08:00.342901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.756 [2024-11-26 20:08:00.342926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.756 [2024-11-26 20:08:00.342935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.756 [2024-11-26 20:08:00.342942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.756 [2024-11-26 20:08:00.342966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.756 qpair failed and we were unable to recover it. 
00:29:59.756 [2024-11-26 20:08:00.352844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.756 [2024-11-26 20:08:00.352899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.756 [2024-11-26 20:08:00.352913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.756 [2024-11-26 20:08:00.352921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.756 [2024-11-26 20:08:00.352928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.756 [2024-11-26 20:08:00.352943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.756 qpair failed and we were unable to recover it. 00:29:59.756 [2024-11-26 20:08:00.362893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.756 [2024-11-26 20:08:00.362943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.757 [2024-11-26 20:08:00.362956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.757 [2024-11-26 20:08:00.362963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.757 [2024-11-26 20:08:00.362970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.757 [2024-11-26 20:08:00.362984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.757 qpair failed and we were unable to recover it. 00:29:59.757 [2024-11-26 20:08:00.372949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.757 [2024-11-26 20:08:00.373024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.757 [2024-11-26 20:08:00.373038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.757 [2024-11-26 20:08:00.373045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.757 [2024-11-26 20:08:00.373051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.757 [2024-11-26 20:08:00.373065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.757 qpair failed and we were unable to recover it. 
00:29:59.757 [2024-11-26 20:08:00.382940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.757 [2024-11-26 20:08:00.382994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.757 [2024-11-26 20:08:00.383019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.757 [2024-11-26 20:08:00.383028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.757 [2024-11-26 20:08:00.383035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.757 [2024-11-26 20:08:00.383054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.757 qpair failed and we were unable to recover it. 00:29:59.757 [2024-11-26 20:08:00.392924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.757 [2024-11-26 20:08:00.392983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.757 [2024-11-26 20:08:00.392998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.757 [2024-11-26 20:08:00.393006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.757 [2024-11-26 20:08:00.393012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.757 [2024-11-26 20:08:00.393028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.757 qpair failed and we were unable to recover it. 00:29:59.757 [2024-11-26 20:08:00.402977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.757 [2024-11-26 20:08:00.403026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.757 [2024-11-26 20:08:00.403040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.757 [2024-11-26 20:08:00.403048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.757 [2024-11-26 20:08:00.403055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:29:59.757 [2024-11-26 20:08:00.403070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.757 qpair failed and we were unable to recover it. 
00:30:00.287 [2024-11-26 20:08:01.044692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.287 [2024-11-26 20:08:01.044738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.287 [2024-11-26 20:08:01.044751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.287 [2024-11-26 20:08:01.044758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.287 [2024-11-26 20:08:01.044764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.287 [2024-11-26 20:08:01.044778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.287 qpair failed and we were unable to recover it. 00:30:00.287 [2024-11-26 20:08:01.054809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.287 [2024-11-26 20:08:01.054863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.287 [2024-11-26 20:08:01.054876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.287 [2024-11-26 20:08:01.054883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.287 [2024-11-26 20:08:01.054889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.287 [2024-11-26 20:08:01.054903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.287 qpair failed and we were unable to recover it. 00:30:00.287 [2024-11-26 20:08:01.064789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.288 [2024-11-26 20:08:01.064839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.288 [2024-11-26 20:08:01.064853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.288 [2024-11-26 20:08:01.064860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.288 [2024-11-26 20:08:01.064866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.288 [2024-11-26 20:08:01.064880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.288 qpair failed and we were unable to recover it. 
00:30:00.288 [2024-11-26 20:08:01.074793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.288 [2024-11-26 20:08:01.074839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.288 [2024-11-26 20:08:01.074852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.288 [2024-11-26 20:08:01.074859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.288 [2024-11-26 20:08:01.074865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.288 [2024-11-26 20:08:01.074879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.288 qpair failed and we were unable to recover it. 00:30:00.288 [2024-11-26 20:08:01.084838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.288 [2024-11-26 20:08:01.084906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.288 [2024-11-26 20:08:01.084919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.288 [2024-11-26 20:08:01.084926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.288 [2024-11-26 20:08:01.084932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.288 [2024-11-26 20:08:01.084946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.288 qpair failed and we were unable to recover it. 00:30:00.288 [2024-11-26 20:08:01.094910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.288 [2024-11-26 20:08:01.094992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.288 [2024-11-26 20:08:01.095005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.288 [2024-11-26 20:08:01.095012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.288 [2024-11-26 20:08:01.095018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.288 [2024-11-26 20:08:01.095032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.288 qpair failed and we were unable to recover it. 
00:30:00.550 [2024-11-26 20:08:01.104878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.550 [2024-11-26 20:08:01.104928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.550 [2024-11-26 20:08:01.104941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.550 [2024-11-26 20:08:01.104948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.550 [2024-11-26 20:08:01.104954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.550 [2024-11-26 20:08:01.104968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.550 qpair failed and we were unable to recover it. 00:30:00.550 [2024-11-26 20:08:01.114922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.550 [2024-11-26 20:08:01.114966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.550 [2024-11-26 20:08:01.114979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.550 [2024-11-26 20:08:01.114986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.550 [2024-11-26 20:08:01.114992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.550 [2024-11-26 20:08:01.115006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.550 qpair failed and we were unable to recover it. 00:30:00.550 [2024-11-26 20:08:01.124931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.550 [2024-11-26 20:08:01.124986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.550 [2024-11-26 20:08:01.125002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.550 [2024-11-26 20:08:01.125009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.550 [2024-11-26 20:08:01.125015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.550 [2024-11-26 20:08:01.125029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.550 qpair failed and we were unable to recover it. 
00:30:00.550 [2024-11-26 20:08:01.135023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.550 [2024-11-26 20:08:01.135074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.550 [2024-11-26 20:08:01.135087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.550 [2024-11-26 20:08:01.135094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.550 [2024-11-26 20:08:01.135100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.550 [2024-11-26 20:08:01.135114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.550 qpair failed and we were unable to recover it. 00:30:00.550 [2024-11-26 20:08:01.144986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.550 [2024-11-26 20:08:01.145038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.550 [2024-11-26 20:08:01.145051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.550 [2024-11-26 20:08:01.145058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.550 [2024-11-26 20:08:01.145064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.550 [2024-11-26 20:08:01.145078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.550 qpair failed and we were unable to recover it. 00:30:00.550 [2024-11-26 20:08:01.155050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.550 [2024-11-26 20:08:01.155136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.550 [2024-11-26 20:08:01.155150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.551 [2024-11-26 20:08:01.155156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.551 [2024-11-26 20:08:01.155167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.551 [2024-11-26 20:08:01.155182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.551 qpair failed and we were unable to recover it. 
00:30:00.551 [2024-11-26 20:08:01.164911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.551 [2024-11-26 20:08:01.164950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.551 [2024-11-26 20:08:01.164963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.551 [2024-11-26 20:08:01.164970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.551 [2024-11-26 20:08:01.164979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.551 [2024-11-26 20:08:01.164993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.551 qpair failed and we were unable to recover it. 00:30:00.551 [2024-11-26 20:08:01.175131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.551 [2024-11-26 20:08:01.175204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.551 [2024-11-26 20:08:01.175217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.551 [2024-11-26 20:08:01.175224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.551 [2024-11-26 20:08:01.175231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.551 [2024-11-26 20:08:01.175244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.551 qpair failed and we were unable to recover it. 00:30:00.551 [2024-11-26 20:08:01.185128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.551 [2024-11-26 20:08:01.185176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.551 [2024-11-26 20:08:01.185189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.551 [2024-11-26 20:08:01.185196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.551 [2024-11-26 20:08:01.185210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.551 [2024-11-26 20:08:01.185224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.551 qpair failed and we were unable to recover it. 
00:30:00.551 [2024-11-26 20:08:01.194988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.551 [2024-11-26 20:08:01.195032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.551 [2024-11-26 20:08:01.195046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.551 [2024-11-26 20:08:01.195053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.551 [2024-11-26 20:08:01.195059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.551 [2024-11-26 20:08:01.195073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.551 qpair failed and we were unable to recover it. 00:30:00.551 [2024-11-26 20:08:01.205114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.551 [2024-11-26 20:08:01.205154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.551 [2024-11-26 20:08:01.205172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.551 [2024-11-26 20:08:01.205179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.551 [2024-11-26 20:08:01.205186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.551 [2024-11-26 20:08:01.205200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.551 qpair failed and we were unable to recover it. 00:30:00.551 [2024-11-26 20:08:01.215200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.551 [2024-11-26 20:08:01.215273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.551 [2024-11-26 20:08:01.215286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.551 [2024-11-26 20:08:01.215293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.551 [2024-11-26 20:08:01.215299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.551 [2024-11-26 20:08:01.215313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.551 qpair failed and we were unable to recover it. 
00:30:00.551 [2024-11-26 20:08:01.225131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.551 [2024-11-26 20:08:01.225192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.551 [2024-11-26 20:08:01.225206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.551 [2024-11-26 20:08:01.225213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.551 [2024-11-26 20:08:01.225219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.551 [2024-11-26 20:08:01.225233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.551 qpair failed and we were unable to recover it. 00:30:00.551 [2024-11-26 20:08:01.235247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.551 [2024-11-26 20:08:01.235290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.551 [2024-11-26 20:08:01.235303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.551 [2024-11-26 20:08:01.235310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.551 [2024-11-26 20:08:01.235316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.551 [2024-11-26 20:08:01.235330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.551 qpair failed and we were unable to recover it. 00:30:00.551 [2024-11-26 20:08:01.245257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.551 [2024-11-26 20:08:01.245300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.551 [2024-11-26 20:08:01.245313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.551 [2024-11-26 20:08:01.245320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.551 [2024-11-26 20:08:01.245326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.551 [2024-11-26 20:08:01.245341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.551 qpair failed and we were unable to recover it. 
00:30:00.551 [2024-11-26 20:08:01.255293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.551 [2024-11-26 20:08:01.255342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.551 [2024-11-26 20:08:01.255357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.551 [2024-11-26 20:08:01.255364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.551 [2024-11-26 20:08:01.255371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.551 [2024-11-26 20:08:01.255385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.551 qpair failed and we were unable to recover it. 00:30:00.551 [2024-11-26 20:08:01.265331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.551 [2024-11-26 20:08:01.265379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.551 [2024-11-26 20:08:01.265392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.551 [2024-11-26 20:08:01.265399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.551 [2024-11-26 20:08:01.265405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.551 [2024-11-26 20:08:01.265419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.551 qpair failed and we were unable to recover it. 00:30:00.551 [2024-11-26 20:08:01.275335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.551 [2024-11-26 20:08:01.275383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.551 [2024-11-26 20:08:01.275396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.551 [2024-11-26 20:08:01.275403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.551 [2024-11-26 20:08:01.275410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.551 [2024-11-26 20:08:01.275424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.551 qpair failed and we were unable to recover it. 
00:30:00.551 [2024-11-26 20:08:01.285253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.551 [2024-11-26 20:08:01.285303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.551 [2024-11-26 20:08:01.285316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.552 [2024-11-26 20:08:01.285323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.552 [2024-11-26 20:08:01.285329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.552 [2024-11-26 20:08:01.285343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.552 qpair failed and we were unable to recover it. 00:30:00.552 [2024-11-26 20:08:01.295492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.552 [2024-11-26 20:08:01.295572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.552 [2024-11-26 20:08:01.295585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.552 [2024-11-26 20:08:01.295592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.552 [2024-11-26 20:08:01.295602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.552 [2024-11-26 20:08:01.295616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.552 qpair failed and we were unable to recover it. 00:30:00.552 [2024-11-26 20:08:01.305425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.552 [2024-11-26 20:08:01.305476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.552 [2024-11-26 20:08:01.305489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.552 [2024-11-26 20:08:01.305496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.552 [2024-11-26 20:08:01.305502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.552 [2024-11-26 20:08:01.305516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.552 qpair failed and we were unable to recover it. 
00:30:00.552 [2024-11-26 20:08:01.315449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.552 [2024-11-26 20:08:01.315491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.552 [2024-11-26 20:08:01.315504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.552 [2024-11-26 20:08:01.315511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.552 [2024-11-26 20:08:01.315517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.552 [2024-11-26 20:08:01.315532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.552 qpair failed and we were unable to recover it. 00:30:00.552 [2024-11-26 20:08:01.325469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.552 [2024-11-26 20:08:01.325511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.552 [2024-11-26 20:08:01.325524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.552 [2024-11-26 20:08:01.325531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.552 [2024-11-26 20:08:01.325537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.552 [2024-11-26 20:08:01.325551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.552 qpair failed and we were unable to recover it. 00:30:00.552 [2024-11-26 20:08:01.335497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.552 [2024-11-26 20:08:01.335561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.552 [2024-11-26 20:08:01.335574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.552 [2024-11-26 20:08:01.335581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.552 [2024-11-26 20:08:01.335587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.552 [2024-11-26 20:08:01.335601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.552 qpair failed and we were unable to recover it. 
00:30:00.552 [2024-11-26 20:08:01.345521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.552 [2024-11-26 20:08:01.345570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.552 [2024-11-26 20:08:01.345583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.552 [2024-11-26 20:08:01.345590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.552 [2024-11-26 20:08:01.345596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.552 [2024-11-26 20:08:01.345610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.552 qpair failed and we were unable to recover it. 00:30:00.552 [2024-11-26 20:08:01.355537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.552 [2024-11-26 20:08:01.355585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.552 [2024-11-26 20:08:01.355599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.552 [2024-11-26 20:08:01.355606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.552 [2024-11-26 20:08:01.355612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.552 [2024-11-26 20:08:01.355626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.552 qpair failed and we were unable to recover it. 00:30:00.552 [2024-11-26 20:08:01.365571] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.552 [2024-11-26 20:08:01.365621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.552 [2024-11-26 20:08:01.365634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.552 [2024-11-26 20:08:01.365641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.552 [2024-11-26 20:08:01.365647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.552 [2024-11-26 20:08:01.365661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.552 qpair failed and we were unable to recover it. 
00:30:00.814 [2024-11-26 20:08:01.375469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.814 [2024-11-26 20:08:01.375514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.814 [2024-11-26 20:08:01.375527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.814 [2024-11-26 20:08:01.375534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.814 [2024-11-26 20:08:01.375540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.814 [2024-11-26 20:08:01.375554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.814 qpair failed and we were unable to recover it. 00:30:00.814 [2024-11-26 20:08:01.385642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.814 [2024-11-26 20:08:01.385698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.814 [2024-11-26 20:08:01.385711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.814 [2024-11-26 20:08:01.385718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.814 [2024-11-26 20:08:01.385725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.814 [2024-11-26 20:08:01.385739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.814 qpair failed and we were unable to recover it. 00:30:00.814 [2024-11-26 20:08:01.395653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.814 [2024-11-26 20:08:01.395696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.815 [2024-11-26 20:08:01.395709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.815 [2024-11-26 20:08:01.395716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.815 [2024-11-26 20:08:01.395723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.815 [2024-11-26 20:08:01.395736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.815 qpair failed and we were unable to recover it. 
00:30:00.815 [2024-11-26 20:08:01.405674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.815 [2024-11-26 20:08:01.405726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.815 [2024-11-26 20:08:01.405739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.815 [2024-11-26 20:08:01.405747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.815 [2024-11-26 20:08:01.405754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.815 [2024-11-26 20:08:01.405769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.815 qpair failed and we were unable to recover it. 00:30:00.815 [2024-11-26 20:08:01.415667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.815 [2024-11-26 20:08:01.415715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.815 [2024-11-26 20:08:01.415728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.815 [2024-11-26 20:08:01.415735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.815 [2024-11-26 20:08:01.415742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.815 [2024-11-26 20:08:01.415755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.815 qpair failed and we were unable to recover it. 00:30:00.815 [2024-11-26 20:08:01.425745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.815 [2024-11-26 20:08:01.425838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.815 [2024-11-26 20:08:01.425851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.815 [2024-11-26 20:08:01.425861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.815 [2024-11-26 20:08:01.425867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.815 [2024-11-26 20:08:01.425881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.815 qpair failed and we were unable to recover it. 
00:30:00.815 [2024-11-26 20:08:01.435750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.815 [2024-11-26 20:08:01.435802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.815 [2024-11-26 20:08:01.435817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.815 [2024-11-26 20:08:01.435824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.815 [2024-11-26 20:08:01.435831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.815 [2024-11-26 20:08:01.435848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.815 qpair failed and we were unable to recover it. 00:30:00.815 [2024-11-26 20:08:01.445766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.815 [2024-11-26 20:08:01.445813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.815 [2024-11-26 20:08:01.445827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.815 [2024-11-26 20:08:01.445834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.815 [2024-11-26 20:08:01.445840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.815 [2024-11-26 20:08:01.445854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.815 qpair failed and we were unable to recover it. 00:30:00.815 [2024-11-26 20:08:01.455850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.815 [2024-11-26 20:08:01.455925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.815 [2024-11-26 20:08:01.455939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.815 [2024-11-26 20:08:01.455947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.815 [2024-11-26 20:08:01.455953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.815 [2024-11-26 20:08:01.455967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.815 qpair failed and we were unable to recover it. 
00:30:00.815 [2024-11-26 20:08:01.465830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.815 [2024-11-26 20:08:01.465882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.815 [2024-11-26 20:08:01.465907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.815 [2024-11-26 20:08:01.465916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.815 [2024-11-26 20:08:01.465923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.815 [2024-11-26 20:08:01.465950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.815 qpair failed and we were unable to recover it. 00:30:00.815 [2024-11-26 20:08:01.475859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.815 [2024-11-26 20:08:01.475946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.815 [2024-11-26 20:08:01.475971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.815 [2024-11-26 20:08:01.475979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.815 [2024-11-26 20:08:01.475986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.815 [2024-11-26 20:08:01.476006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.815 qpair failed and we were unable to recover it. 00:30:00.815 [2024-11-26 20:08:01.485881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.815 [2024-11-26 20:08:01.485939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.815 [2024-11-26 20:08:01.485963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.815 [2024-11-26 20:08:01.485972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.815 [2024-11-26 20:08:01.485979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.815 [2024-11-26 20:08:01.485998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.815 qpair failed and we were unable to recover it. 
00:30:00.815 [2024-11-26 20:08:01.495905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.815 [2024-11-26 20:08:01.495956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.815 [2024-11-26 20:08:01.495981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.815 [2024-11-26 20:08:01.495990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.815 [2024-11-26 20:08:01.495996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.815 [2024-11-26 20:08:01.496016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.815 qpair failed and we were unable to recover it. 00:30:00.815 [2024-11-26 20:08:01.505943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.815 [2024-11-26 20:08:01.505995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.815 [2024-11-26 20:08:01.506010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.815 [2024-11-26 20:08:01.506017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.815 [2024-11-26 20:08:01.506024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.815 [2024-11-26 20:08:01.506039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.815 qpair failed and we were unable to recover it. 00:30:00.815 [2024-11-26 20:08:01.515969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.815 [2024-11-26 20:08:01.516017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.815 [2024-11-26 20:08:01.516030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.815 [2024-11-26 20:08:01.516037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.815 [2024-11-26 20:08:01.516044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.815 [2024-11-26 20:08:01.516058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.815 qpair failed and we were unable to recover it. 
00:30:00.816 [2024-11-26 20:08:01.525879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.816 [2024-11-26 20:08:01.525927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.816 [2024-11-26 20:08:01.525940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.816 [2024-11-26 20:08:01.525948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.816 [2024-11-26 20:08:01.525954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.816 [2024-11-26 20:08:01.525968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.816 qpair failed and we were unable to recover it. 00:30:00.816 [2024-11-26 20:08:01.536017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.816 [2024-11-26 20:08:01.536062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.816 [2024-11-26 20:08:01.536076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.816 [2024-11-26 20:08:01.536083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.816 [2024-11-26 20:08:01.536089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.816 [2024-11-26 20:08:01.536104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.816 qpair failed and we were unable to recover it. 00:30:00.816 [2024-11-26 20:08:01.546060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.816 [2024-11-26 20:08:01.546110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.816 [2024-11-26 20:08:01.546123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.816 [2024-11-26 20:08:01.546130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.816 [2024-11-26 20:08:01.546136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.816 [2024-11-26 20:08:01.546150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.816 qpair failed and we were unable to recover it. 
00:30:00.816 [2024-11-26 20:08:01.556082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.816 [2024-11-26 20:08:01.556130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.816 [2024-11-26 20:08:01.556143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.816 [2024-11-26 20:08:01.556154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.816 [2024-11-26 20:08:01.556166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.816 [2024-11-26 20:08:01.556180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.816 qpair failed and we were unable to recover it. 00:30:00.816 [2024-11-26 20:08:01.566111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.816 [2024-11-26 20:08:01.566156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.816 [2024-11-26 20:08:01.566173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.816 [2024-11-26 20:08:01.566181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.816 [2024-11-26 20:08:01.566187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.816 [2024-11-26 20:08:01.566201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.816 qpair failed and we were unable to recover it. 00:30:00.816 [2024-11-26 20:08:01.576074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.816 [2024-11-26 20:08:01.576134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.816 [2024-11-26 20:08:01.576146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.816 [2024-11-26 20:08:01.576153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.816 [2024-11-26 20:08:01.576165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:00.816 [2024-11-26 20:08:01.576179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.816 qpair failed and we were unable to recover it. 
00:30:00.816 [2024-11-26 20:08:01.586186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.816 [2024-11-26 20:08:01.586239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.816 [2024-11-26 20:08:01.586252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.816 [2024-11-26 20:08:01.586259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.816 [2024-11-26 20:08:01.586265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:00.816 [2024-11-26 20:08:01.586280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.816 qpair failed and we were unable to recover it.
00:30:00.816 [2024-11-26 20:08:01.596251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.816 [2024-11-26 20:08:01.596295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.816 [2024-11-26 20:08:01.596307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.816 [2024-11-26 20:08:01.596315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.816 [2024-11-26 20:08:01.596321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:00.816 [2024-11-26 20:08:01.596338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.816 qpair failed and we were unable to recover it.
00:30:00.816 [2024-11-26 20:08:01.606232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.816 [2024-11-26 20:08:01.606275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.816 [2024-11-26 20:08:01.606288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.816 [2024-11-26 20:08:01.606295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.816 [2024-11-26 20:08:01.606301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:00.816 [2024-11-26 20:08:01.606315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.816 qpair failed and we were unable to recover it.
00:30:00.816 [2024-11-26 20:08:01.616247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.816 [2024-11-26 20:08:01.616292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.816 [2024-11-26 20:08:01.616305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.816 [2024-11-26 20:08:01.616312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.816 [2024-11-26 20:08:01.616318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:00.816 [2024-11-26 20:08:01.616332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.816 qpair failed and we were unable to recover it.
00:30:00.816 [2024-11-26 20:08:01.626278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.816 [2024-11-26 20:08:01.626328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.816 [2024-11-26 20:08:01.626341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.816 [2024-11-26 20:08:01.626348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.816 [2024-11-26 20:08:01.626354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:00.816 [2024-11-26 20:08:01.626368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.816 qpair failed and we were unable to recover it.
00:30:01.079 [2024-11-26 20:08:01.636272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.079 [2024-11-26 20:08:01.636317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.079 [2024-11-26 20:08:01.636330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.079 [2024-11-26 20:08:01.636337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.079 [2024-11-26 20:08:01.636343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.079 [2024-11-26 20:08:01.636357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.079 qpair failed and we were unable to recover it.
00:30:01.079 [2024-11-26 20:08:01.646313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.079 [2024-11-26 20:08:01.646366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.079 [2024-11-26 20:08:01.646379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.079 [2024-11-26 20:08:01.646386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.079 [2024-11-26 20:08:01.646392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.079 [2024-11-26 20:08:01.646406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.079 qpair failed and we were unable to recover it.
00:30:01.079 [2024-11-26 20:08:01.656335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.079 [2024-11-26 20:08:01.656384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.079 [2024-11-26 20:08:01.656397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.079 [2024-11-26 20:08:01.656405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.079 [2024-11-26 20:08:01.656411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.079 [2024-11-26 20:08:01.656426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.079 qpair failed and we were unable to recover it.
00:30:01.079 [2024-11-26 20:08:01.666384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.079 [2024-11-26 20:08:01.666431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.079 [2024-11-26 20:08:01.666444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.079 [2024-11-26 20:08:01.666451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.079 [2024-11-26 20:08:01.666457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.079 [2024-11-26 20:08:01.666471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.079 qpair failed and we were unable to recover it.
00:30:01.079 [2024-11-26 20:08:01.676374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.079 [2024-11-26 20:08:01.676461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.079 [2024-11-26 20:08:01.676473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.079 [2024-11-26 20:08:01.676480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.079 [2024-11-26 20:08:01.676487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.079 [2024-11-26 20:08:01.676500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.079 qpair failed and we were unable to recover it.
00:30:01.079 [2024-11-26 20:08:01.686371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.079 [2024-11-26 20:08:01.686416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.079 [2024-11-26 20:08:01.686432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.079 [2024-11-26 20:08:01.686439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.079 [2024-11-26 20:08:01.686445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.080 [2024-11-26 20:08:01.686459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.080 qpair failed and we were unable to recover it.
00:30:01.080 [2024-11-26 20:08:01.696447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.080 [2024-11-26 20:08:01.696491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.080 [2024-11-26 20:08:01.696504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.080 [2024-11-26 20:08:01.696511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.080 [2024-11-26 20:08:01.696517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.080 [2024-11-26 20:08:01.696531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.080 qpair failed and we were unable to recover it.
00:30:01.080 [2024-11-26 20:08:01.706474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.080 [2024-11-26 20:08:01.706524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.080 [2024-11-26 20:08:01.706537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.080 [2024-11-26 20:08:01.706544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.080 [2024-11-26 20:08:01.706550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.080 [2024-11-26 20:08:01.706563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.080 qpair failed and we were unable to recover it.
00:30:01.080 [2024-11-26 20:08:01.716491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.080 [2024-11-26 20:08:01.716537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.080 [2024-11-26 20:08:01.716549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.080 [2024-11-26 20:08:01.716556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.080 [2024-11-26 20:08:01.716562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.080 [2024-11-26 20:08:01.716576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.080 qpair failed and we were unable to recover it.
00:30:01.080 [2024-11-26 20:08:01.726509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.080 [2024-11-26 20:08:01.726552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.080 [2024-11-26 20:08:01.726565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.080 [2024-11-26 20:08:01.726572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.080 [2024-11-26 20:08:01.726582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.080 [2024-11-26 20:08:01.726596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.080 qpair failed and we were unable to recover it.
00:30:01.080 [2024-11-26 20:08:01.736564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.080 [2024-11-26 20:08:01.736620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.080 [2024-11-26 20:08:01.736633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.080 [2024-11-26 20:08:01.736640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.080 [2024-11-26 20:08:01.736646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.080 [2024-11-26 20:08:01.736660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.080 qpair failed and we were unable to recover it.
00:30:01.080 [2024-11-26 20:08:01.746592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.080 [2024-11-26 20:08:01.746641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.080 [2024-11-26 20:08:01.746654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.080 [2024-11-26 20:08:01.746661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.080 [2024-11-26 20:08:01.746667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.080 [2024-11-26 20:08:01.746681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.080 qpair failed and we were unable to recover it.
00:30:01.080 [2024-11-26 20:08:01.756594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.080 [2024-11-26 20:08:01.756652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.080 [2024-11-26 20:08:01.756664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.080 [2024-11-26 20:08:01.756674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.080 [2024-11-26 20:08:01.756682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.080 [2024-11-26 20:08:01.756696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.080 qpair failed and we were unable to recover it.
00:30:01.080 [2024-11-26 20:08:01.766669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.080 [2024-11-26 20:08:01.766754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.080 [2024-11-26 20:08:01.766767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.080 [2024-11-26 20:08:01.766774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.080 [2024-11-26 20:08:01.766780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.080 [2024-11-26 20:08:01.766794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.080 qpair failed and we were unable to recover it.
00:30:01.080 [2024-11-26 20:08:01.776647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.080 [2024-11-26 20:08:01.776729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.080 [2024-11-26 20:08:01.776742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.080 [2024-11-26 20:08:01.776749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.080 [2024-11-26 20:08:01.776756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.080 [2024-11-26 20:08:01.776769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.080 qpair failed and we were unable to recover it.
00:30:01.080 [2024-11-26 20:08:01.786571] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.080 [2024-11-26 20:08:01.786616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.081 [2024-11-26 20:08:01.786629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.081 [2024-11-26 20:08:01.786636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.081 [2024-11-26 20:08:01.786642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.081 [2024-11-26 20:08:01.786656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.081 qpair failed and we were unable to recover it.
00:30:01.081 [2024-11-26 20:08:01.796671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.081 [2024-11-26 20:08:01.796718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.081 [2024-11-26 20:08:01.796731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.081 [2024-11-26 20:08:01.796738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.081 [2024-11-26 20:08:01.796745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.081 [2024-11-26 20:08:01.796758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.081 qpair failed and we were unable to recover it.
00:30:01.081 [2024-11-26 20:08:01.806618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.081 [2024-11-26 20:08:01.806664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.081 [2024-11-26 20:08:01.806678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.081 [2024-11-26 20:08:01.806685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.081 [2024-11-26 20:08:01.806691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.081 [2024-11-26 20:08:01.806706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.081 qpair failed and we were unable to recover it.
00:30:01.081 [2024-11-26 20:08:01.816771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.081 [2024-11-26 20:08:01.816823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.081 [2024-11-26 20:08:01.816840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.081 [2024-11-26 20:08:01.816847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.081 [2024-11-26 20:08:01.816853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.081 [2024-11-26 20:08:01.816867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.081 qpair failed and we were unable to recover it.
00:30:01.081 [2024-11-26 20:08:01.826806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.081 [2024-11-26 20:08:01.826875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.081 [2024-11-26 20:08:01.826887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.081 [2024-11-26 20:08:01.826894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.081 [2024-11-26 20:08:01.826900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.081 [2024-11-26 20:08:01.826914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.081 qpair failed and we were unable to recover it.
00:30:01.081 [2024-11-26 20:08:01.836790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.081 [2024-11-26 20:08:01.836839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.081 [2024-11-26 20:08:01.836852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.081 [2024-11-26 20:08:01.836859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.081 [2024-11-26 20:08:01.836865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.081 [2024-11-26 20:08:01.836879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.081 qpair failed and we were unable to recover it.
00:30:01.081 [2024-11-26 20:08:01.846846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.081 [2024-11-26 20:08:01.846890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.081 [2024-11-26 20:08:01.846903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.081 [2024-11-26 20:08:01.846910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.081 [2024-11-26 20:08:01.846916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.081 [2024-11-26 20:08:01.846930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.081 qpair failed and we were unable to recover it.
00:30:01.081 [2024-11-26 20:08:01.856765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.081 [2024-11-26 20:08:01.856822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.081 [2024-11-26 20:08:01.856835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.081 [2024-11-26 20:08:01.856842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.081 [2024-11-26 20:08:01.856852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.081 [2024-11-26 20:08:01.856866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.081 qpair failed and we were unable to recover it.
00:30:01.081 [2024-11-26 20:08:01.866794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.081 [2024-11-26 20:08:01.866853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.081 [2024-11-26 20:08:01.866866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.081 [2024-11-26 20:08:01.866873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.081 [2024-11-26 20:08:01.866879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.081 [2024-11-26 20:08:01.866893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.081 qpair failed and we were unable to recover it.
00:30:01.081 [2024-11-26 20:08:01.876898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.081 [2024-11-26 20:08:01.876946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.081 [2024-11-26 20:08:01.876970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.081 [2024-11-26 20:08:01.876979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.081 [2024-11-26 20:08:01.876986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.082 [2024-11-26 20:08:01.877005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.082 qpair failed and we were unable to recover it.
00:30:01.082 [2024-11-26 20:08:01.886949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.082 [2024-11-26 20:08:01.887001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.082 [2024-11-26 20:08:01.887025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.082 [2024-11-26 20:08:01.887034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.082 [2024-11-26 20:08:01.887041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.082 [2024-11-26 20:08:01.887060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.082 qpair failed and we were unable to recover it.
00:30:01.345 [2024-11-26 20:08:01.896859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.345 [2024-11-26 20:08:01.896908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.345 [2024-11-26 20:08:01.896923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.345 [2024-11-26 20:08:01.896930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.345 [2024-11-26 20:08:01.896937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.345 [2024-11-26 20:08:01.896952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.345 qpair failed and we were unable to recover it.
00:30:01.345 [2024-11-26 20:08:01.906992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.345 [2024-11-26 20:08:01.907041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.345 [2024-11-26 20:08:01.907055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.345 [2024-11-26 20:08:01.907063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.345 [2024-11-26 20:08:01.907071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.345 [2024-11-26 20:08:01.907086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.345 qpair failed and we were unable to recover it.
00:30:01.345 [2024-11-26 20:08:01.917041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.345 [2024-11-26 20:08:01.917086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.345 [2024-11-26 20:08:01.917099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.345 [2024-11-26 20:08:01.917106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.345 [2024-11-26 20:08:01.917112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.345 [2024-11-26 20:08:01.917126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.345 qpair failed and we were unable to recover it.
00:30:01.345 [2024-11-26 20:08:01.927054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.345 [2024-11-26 20:08:01.927099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.345 [2024-11-26 20:08:01.927113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.345 [2024-11-26 20:08:01.927120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.345 [2024-11-26 20:08:01.927126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.345 [2024-11-26 20:08:01.927140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.345 qpair failed and we were unable to recover it.
00:30:01.345 [2024-11-26 20:08:01.937100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.345 [2024-11-26 20:08:01.937153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.345 [2024-11-26 20:08:01.937170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.345 [2024-11-26 20:08:01.937178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.345 [2024-11-26 20:08:01.937184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.345 [2024-11-26 20:08:01.937198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.345 qpair failed and we were unable to recover it.
00:30:01.345 [2024-11-26 20:08:01.947098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.345 [2024-11-26 20:08:01.947144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.345 [2024-11-26 20:08:01.947160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.345 [2024-11-26 20:08:01.947168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.345 [2024-11-26 20:08:01.947174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.345 [2024-11-26 20:08:01.947189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.345 qpair failed and we were unable to recover it.
00:30:01.345 [2024-11-26 20:08:01.957134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.345 [2024-11-26 20:08:01.957181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.345 [2024-11-26 20:08:01.957194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.345 [2024-11-26 20:08:01.957201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.345 [2024-11-26 20:08:01.957208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.345 [2024-11-26 20:08:01.957222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.345 qpair failed and we were unable to recover it.
00:30:01.345 [2024-11-26 20:08:01.967058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.345 [2024-11-26 20:08:01.967140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.345 [2024-11-26 20:08:01.967153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.345 [2024-11-26 20:08:01.967163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.345 [2024-11-26 20:08:01.967169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.345 [2024-11-26 20:08:01.967183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.345 qpair failed and we were unable to recover it.
00:30:01.345 [2024-11-26 20:08:01.977226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.345 [2024-11-26 20:08:01.977280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.345 [2024-11-26 20:08:01.977293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.345 [2024-11-26 20:08:01.977300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.345 [2024-11-26 20:08:01.977306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.346 [2024-11-26 20:08:01.977320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.346 qpair failed and we were unable to recover it.
00:30:01.346 [2024-11-26 20:08:01.987242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.346 [2024-11-26 20:08:01.987341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.346 [2024-11-26 20:08:01.987354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.346 [2024-11-26 20:08:01.987364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.346 [2024-11-26 20:08:01.987371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.346 [2024-11-26 20:08:01.987385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.346 qpair failed and we were unable to recover it.
00:30:01.346 [2024-11-26 20:08:01.997178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.346 [2024-11-26 20:08:01.997269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.346 [2024-11-26 20:08:01.997282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.346 [2024-11-26 20:08:01.997289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.346 [2024-11-26 20:08:01.997296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.346 [2024-11-26 20:08:01.997309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.346 qpair failed and we were unable to recover it.
00:30:01.346 [2024-11-26 20:08:02.007303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.346 [2024-11-26 20:08:02.007345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.346 [2024-11-26 20:08:02.007358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.346 [2024-11-26 20:08:02.007365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.346 [2024-11-26 20:08:02.007371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.346 [2024-11-26 20:08:02.007385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.346 qpair failed and we were unable to recover it.
00:30:01.346 [2024-11-26 20:08:02.017382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.346 [2024-11-26 20:08:02.017432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.346 [2024-11-26 20:08:02.017445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.346 [2024-11-26 20:08:02.017452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.346 [2024-11-26 20:08:02.017458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.346 [2024-11-26 20:08:02.017472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.346 qpair failed and we were unable to recover it.
00:30:01.346 [2024-11-26 20:08:02.027369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.346 [2024-11-26 20:08:02.027460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.346 [2024-11-26 20:08:02.027473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.346 [2024-11-26 20:08:02.027480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.346 [2024-11-26 20:08:02.027486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.346 [2024-11-26 20:08:02.027504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.346 qpair failed and we were unable to recover it.
00:30:01.346 [2024-11-26 20:08:02.037352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.346 [2024-11-26 20:08:02.037394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.346 [2024-11-26 20:08:02.037407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.346 [2024-11-26 20:08:02.037414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.346 [2024-11-26 20:08:02.037420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.346 [2024-11-26 20:08:02.037434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.346 qpair failed and we were unable to recover it.
00:30:01.346 [2024-11-26 20:08:02.047409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.346 [2024-11-26 20:08:02.047467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.346 [2024-11-26 20:08:02.047480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.346 [2024-11-26 20:08:02.047487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.346 [2024-11-26 20:08:02.047493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.346 [2024-11-26 20:08:02.047507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.346 qpair failed and we were unable to recover it.
00:30:01.346 [2024-11-26 20:08:02.057408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.346 [2024-11-26 20:08:02.057483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.346 [2024-11-26 20:08:02.057496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.346 [2024-11-26 20:08:02.057503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.346 [2024-11-26 20:08:02.057509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.346 [2024-11-26 20:08:02.057522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.346 qpair failed and we were unable to recover it.
00:30:01.346 [2024-11-26 20:08:02.067445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.346 [2024-11-26 20:08:02.067491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.346 [2024-11-26 20:08:02.067504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.346 [2024-11-26 20:08:02.067511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.346 [2024-11-26 20:08:02.067517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.347 [2024-11-26 20:08:02.067531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.347 qpair failed and we were unable to recover it.
00:30:01.347 [2024-11-26 20:08:02.077482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.347 [2024-11-26 20:08:02.077528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.347 [2024-11-26 20:08:02.077541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.347 [2024-11-26 20:08:02.077548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.347 [2024-11-26 20:08:02.077554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.347 [2024-11-26 20:08:02.077568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.347 qpair failed and we were unable to recover it.
00:30:01.347 [2024-11-26 20:08:02.087413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.347 [2024-11-26 20:08:02.087466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.347 [2024-11-26 20:08:02.087480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.347 [2024-11-26 20:08:02.087487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.347 [2024-11-26 20:08:02.087494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.347 [2024-11-26 20:08:02.087509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.347 qpair failed and we were unable to recover it.
00:30:01.347 [2024-11-26 20:08:02.097522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.347 [2024-11-26 20:08:02.097567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.347 [2024-11-26 20:08:02.097580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.347 [2024-11-26 20:08:02.097587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.347 [2024-11-26 20:08:02.097593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.347 [2024-11-26 20:08:02.097607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.347 qpair failed and we were unable to recover it.
00:30:01.347 [2024-11-26 20:08:02.107571] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.347 [2024-11-26 20:08:02.107619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.347 [2024-11-26 20:08:02.107632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.347 [2024-11-26 20:08:02.107639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.347 [2024-11-26 20:08:02.107645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.347 [2024-11-26 20:08:02.107659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.347 qpair failed and we were unable to recover it.
00:30:01.347 [2024-11-26 20:08:02.117591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.347 [2024-11-26 20:08:02.117643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.347 [2024-11-26 20:08:02.117659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.347 [2024-11-26 20:08:02.117666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.347 [2024-11-26 20:08:02.117672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.347 [2024-11-26 20:08:02.117686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.347 qpair failed and we were unable to recover it.
00:30:01.347 [2024-11-26 20:08:02.127618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.347 [2024-11-26 20:08:02.127664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.347 [2024-11-26 20:08:02.127677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.347 [2024-11-26 20:08:02.127685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.347 [2024-11-26 20:08:02.127691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.347 [2024-11-26 20:08:02.127705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.347 qpair failed and we were unable to recover it.
00:30:01.347 [2024-11-26 20:08:02.137652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.347 [2024-11-26 20:08:02.137697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.347 [2024-11-26 20:08:02.137710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.347 [2024-11-26 20:08:02.137717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.347 [2024-11-26 20:08:02.137723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.347 [2024-11-26 20:08:02.137737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.347 qpair failed and we were unable to recover it.
00:30:01.347 [2024-11-26 20:08:02.147669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.347 [2024-11-26 20:08:02.147736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.347 [2024-11-26 20:08:02.147749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.347 [2024-11-26 20:08:02.147756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.347 [2024-11-26 20:08:02.147762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90
00:30:01.347 [2024-11-26 20:08:02.147776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.347 qpair failed and we were unable to recover it.
00:30:01.347 [2024-11-26 20:08:02.157688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.347 [2024-11-26 20:08:02.157727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.347 [2024-11-26 20:08:02.157741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.347 [2024-11-26 20:08:02.157748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.347 [2024-11-26 20:08:02.157755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.348 [2024-11-26 20:08:02.157772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.348 qpair failed and we were unable to recover it. 00:30:01.619 [2024-11-26 20:08:02.167726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.619 [2024-11-26 20:08:02.167786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.619 [2024-11-26 20:08:02.167799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.619 [2024-11-26 20:08:02.167806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.619 [2024-11-26 20:08:02.167813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.619 [2024-11-26 20:08:02.167827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.619 qpair failed and we were unable to recover it. 00:30:01.619 [2024-11-26 20:08:02.177713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.619 [2024-11-26 20:08:02.177760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.619 [2024-11-26 20:08:02.177773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.619 [2024-11-26 20:08:02.177780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.619 [2024-11-26 20:08:02.177786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.619 [2024-11-26 20:08:02.177800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.619 qpair failed and we were unable to recover it. 
00:30:01.619 [2024-11-26 20:08:02.187784] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.619 [2024-11-26 20:08:02.187831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.619 [2024-11-26 20:08:02.187844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.619 [2024-11-26 20:08:02.187851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.619 [2024-11-26 20:08:02.187857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.619 [2024-11-26 20:08:02.187871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.619 qpair failed and we were unable to recover it. 00:30:01.619 [2024-11-26 20:08:02.197796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.620 [2024-11-26 20:08:02.197841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.620 [2024-11-26 20:08:02.197855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.620 [2024-11-26 20:08:02.197863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.620 [2024-11-26 20:08:02.197869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.620 [2024-11-26 20:08:02.197885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.620 qpair failed and we were unable to recover it. 00:30:01.620 [2024-11-26 20:08:02.207822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.620 [2024-11-26 20:08:02.207871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.620 [2024-11-26 20:08:02.207885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.620 [2024-11-26 20:08:02.207892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.620 [2024-11-26 20:08:02.207899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.620 [2024-11-26 20:08:02.207913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.620 qpair failed and we were unable to recover it. 
00:30:01.620 [2024-11-26 20:08:02.217729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.620 [2024-11-26 20:08:02.217774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.620 [2024-11-26 20:08:02.217787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.620 [2024-11-26 20:08:02.217794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.620 [2024-11-26 20:08:02.217800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.620 [2024-11-26 20:08:02.217815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.620 qpair failed and we were unable to recover it. 00:30:01.620 [2024-11-26 20:08:02.227839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.620 [2024-11-26 20:08:02.227884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.620 [2024-11-26 20:08:02.227897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.620 [2024-11-26 20:08:02.227904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.620 [2024-11-26 20:08:02.227910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.620 [2024-11-26 20:08:02.227924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.620 qpair failed and we were unable to recover it. 00:30:01.620 [2024-11-26 20:08:02.237910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.620 [2024-11-26 20:08:02.237954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.620 [2024-11-26 20:08:02.237966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.620 [2024-11-26 20:08:02.237973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.620 [2024-11-26 20:08:02.237980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.620 [2024-11-26 20:08:02.237993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.620 qpair failed and we were unable to recover it. 
00:30:01.620 [2024-11-26 20:08:02.247951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.620 [2024-11-26 20:08:02.248003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.620 [2024-11-26 20:08:02.248035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.620 [2024-11-26 20:08:02.248044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.620 [2024-11-26 20:08:02.248051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.620 [2024-11-26 20:08:02.248071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.620 qpair failed and we were unable to recover it. 00:30:01.620 [2024-11-26 20:08:02.257937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.620 [2024-11-26 20:08:02.257984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.620 [2024-11-26 20:08:02.257999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.620 [2024-11-26 20:08:02.258007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.620 [2024-11-26 20:08:02.258013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.620 [2024-11-26 20:08:02.258028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.620 qpair failed and we were unable to recover it. 00:30:01.620 [2024-11-26 20:08:02.267998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.620 [2024-11-26 20:08:02.268045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.620 [2024-11-26 20:08:02.268058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.620 [2024-11-26 20:08:02.268065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.620 [2024-11-26 20:08:02.268071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.620 [2024-11-26 20:08:02.268085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.620 qpair failed and we were unable to recover it. 
00:30:01.620 [2024-11-26 20:08:02.278014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.620 [2024-11-26 20:08:02.278062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.620 [2024-11-26 20:08:02.278075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.620 [2024-11-26 20:08:02.278082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.620 [2024-11-26 20:08:02.278088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.620 [2024-11-26 20:08:02.278102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.620 qpair failed and we were unable to recover it. 00:30:01.620 [2024-11-26 20:08:02.288051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.620 [2024-11-26 20:08:02.288095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.620 [2024-11-26 20:08:02.288109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.620 [2024-11-26 20:08:02.288116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.620 [2024-11-26 20:08:02.288126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.620 [2024-11-26 20:08:02.288140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.621 qpair failed and we were unable to recover it. 00:30:01.621 [2024-11-26 20:08:02.298053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.621 [2024-11-26 20:08:02.298099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.621 [2024-11-26 20:08:02.298113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.621 [2024-11-26 20:08:02.298120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.621 [2024-11-26 20:08:02.298126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.621 [2024-11-26 20:08:02.298141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.621 qpair failed and we were unable to recover it. 
00:30:01.621 [2024-11-26 20:08:02.308072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.621 [2024-11-26 20:08:02.308125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.621 [2024-11-26 20:08:02.308138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.621 [2024-11-26 20:08:02.308145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.621 [2024-11-26 20:08:02.308151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.621 [2024-11-26 20:08:02.308169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.621 qpair failed and we were unable to recover it. 00:30:01.621 [2024-11-26 20:08:02.318108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.621 [2024-11-26 20:08:02.318153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.621 [2024-11-26 20:08:02.318170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.621 [2024-11-26 20:08:02.318177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.621 [2024-11-26 20:08:02.318183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.621 [2024-11-26 20:08:02.318197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.621 qpair failed and we were unable to recover it. 00:30:01.621 [2024-11-26 20:08:02.328138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.621 [2024-11-26 20:08:02.328185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.621 [2024-11-26 20:08:02.328199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.621 [2024-11-26 20:08:02.328206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.621 [2024-11-26 20:08:02.328212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.621 [2024-11-26 20:08:02.328226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.621 qpair failed and we were unable to recover it. 
00:30:01.621 [2024-11-26 20:08:02.338182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.621 [2024-11-26 20:08:02.338241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.621 [2024-11-26 20:08:02.338254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.621 [2024-11-26 20:08:02.338261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.621 [2024-11-26 20:08:02.338267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.621 [2024-11-26 20:08:02.338281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.621 qpair failed and we were unable to recover it. 00:30:01.621 [2024-11-26 20:08:02.348200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.621 [2024-11-26 20:08:02.348249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.621 [2024-11-26 20:08:02.348262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.621 [2024-11-26 20:08:02.348269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.621 [2024-11-26 20:08:02.348276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.621 [2024-11-26 20:08:02.348289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.621 qpair failed and we were unable to recover it. 00:30:01.621 [2024-11-26 20:08:02.358213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.621 [2024-11-26 20:08:02.358257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.621 [2024-11-26 20:08:02.358271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.621 [2024-11-26 20:08:02.358278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.621 [2024-11-26 20:08:02.358284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.621 [2024-11-26 20:08:02.358298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.621 qpair failed and we were unable to recover it. 
00:30:01.621 [2024-11-26 20:08:02.368236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.621 [2024-11-26 20:08:02.368285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.621 [2024-11-26 20:08:02.368298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.621 [2024-11-26 20:08:02.368305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.621 [2024-11-26 20:08:02.368312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.621 [2024-11-26 20:08:02.368326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.621 qpair failed and we were unable to recover it. 00:30:01.621 [2024-11-26 20:08:02.378280] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.621 [2024-11-26 20:08:02.378328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.621 [2024-11-26 20:08:02.378344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.621 [2024-11-26 20:08:02.378351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.621 [2024-11-26 20:08:02.378357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.621 [2024-11-26 20:08:02.378371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.621 qpair failed and we were unable to recover it. 00:30:01.621 [2024-11-26 20:08:02.388282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.621 [2024-11-26 20:08:02.388332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.621 [2024-11-26 20:08:02.388346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.621 [2024-11-26 20:08:02.388353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.621 [2024-11-26 20:08:02.388359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.621 [2024-11-26 20:08:02.388373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.621 qpair failed and we were unable to recover it. 
00:30:01.621 [2024-11-26 20:08:02.398319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.621 [2024-11-26 20:08:02.398370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.621 [2024-11-26 20:08:02.398383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.621 [2024-11-26 20:08:02.398390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.621 [2024-11-26 20:08:02.398396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.621 [2024-11-26 20:08:02.398410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.621 qpair failed and we were unable to recover it. 00:30:01.621 [2024-11-26 20:08:02.408347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.621 [2024-11-26 20:08:02.408430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.621 [2024-11-26 20:08:02.408443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.622 [2024-11-26 20:08:02.408450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.622 [2024-11-26 20:08:02.408456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.622 [2024-11-26 20:08:02.408470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.622 qpair failed and we were unable to recover it. 00:30:01.622 [2024-11-26 20:08:02.418407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.622 [2024-11-26 20:08:02.418489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.622 [2024-11-26 20:08:02.418502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.622 [2024-11-26 20:08:02.418512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.622 [2024-11-26 20:08:02.418518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.622 [2024-11-26 20:08:02.418532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.622 qpair failed and we were unable to recover it. 
00:30:01.622 [2024-11-26 20:08:02.428425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.622 [2024-11-26 20:08:02.428472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.622 [2024-11-26 20:08:02.428486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.622 [2024-11-26 20:08:02.428493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.622 [2024-11-26 20:08:02.428499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.622 [2024-11-26 20:08:02.428513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.622 qpair failed and we were unable to recover it. 00:30:01.961 [2024-11-26 20:08:02.438431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.961 [2024-11-26 20:08:02.438477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.961 [2024-11-26 20:08:02.438490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.961 [2024-11-26 20:08:02.438497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.961 [2024-11-26 20:08:02.438503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.961 [2024-11-26 20:08:02.438517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.961 qpair failed and we were unable to recover it. 00:30:01.961 [2024-11-26 20:08:02.448472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.961 [2024-11-26 20:08:02.448516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.961 [2024-11-26 20:08:02.448529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.961 [2024-11-26 20:08:02.448536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.961 [2024-11-26 20:08:02.448542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.961 [2024-11-26 20:08:02.448556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.961 qpair failed and we were unable to recover it. 
00:30:01.961 [2024-11-26 20:08:02.458494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.961 [2024-11-26 20:08:02.458539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.961 [2024-11-26 20:08:02.458552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.961 [2024-11-26 20:08:02.458559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.961 [2024-11-26 20:08:02.458566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.961 [2024-11-26 20:08:02.458580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.961 qpair failed and we were unable to recover it. 00:30:01.961 [2024-11-26 20:08:02.468524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.961 [2024-11-26 20:08:02.468616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.961 [2024-11-26 20:08:02.468629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.961 [2024-11-26 20:08:02.468636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.961 [2024-11-26 20:08:02.468642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.961 [2024-11-26 20:08:02.468656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.961 qpair failed and we were unable to recover it. 00:30:01.961 [2024-11-26 20:08:02.478430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.961 [2024-11-26 20:08:02.478478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.961 [2024-11-26 20:08:02.478492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.961 [2024-11-26 20:08:02.478499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.961 [2024-11-26 20:08:02.478506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8918000b90 00:30:01.961 [2024-11-26 20:08:02.478520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.961 qpair failed and we were unable to recover it. 00:30:01.961 [2024-11-26 20:08:02.478679] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:30:01.961 A controller has encountered a failure and is being reset. 00:30:01.961 [2024-11-26 20:08:02.478806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfce10 (9): Bad file descriptor 00:30:01.961 Controller properly reset. 
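[Editor's note: the tail of the block above shows the actual recovery path: once a Keep Alive submission fails, the host declares the controller failed, the reset path flushes and closes the dead socket (hence "Bad file descriptor" on tqpair 0xcfce10), and the log confirms "Controller properly reset." before the controllers are re-initialized below. A minimal sketch of that host-side pattern against SPDK's public API follows; it is illustrative only — the real test drives this through the disconnect harness, and re-creating qpairs after the reset is omitted.]

    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"

    /* Sketch: poll an I/O qpair and fall back to a full controller reset on a
     * transport error, mirroring the sequence logged above. Assumes ctrlr and
     * qpair were connected elsewhere; real code would also re-create its
     * qpairs after a successful reset. */
    static void
    poll_and_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
    {
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);
        if (rc < 0) {
            /* e.g. -ENXIO (-6), the "CQ transport error" above: the qpair is
             * unusable, so reset the whole controller as the host stack does
             * after the keep-alive failure. */
            if (spdk_nvme_ctrlr_reset(ctrlr) == 0) {
                printf("Controller properly reset.\n");
            }
        }
    }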
00:30:01.961 Initializing NVMe Controllers 00:30:01.961 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:01.961 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:01.961 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:01.961 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:01.961 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:01.961 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:01.961 Initialization complete. Launching workers. 00:30:01.961 Starting thread on core 1 00:30:01.961 Starting thread on core 2 00:30:01.961 Starting thread on core 3 00:30:01.961 Starting thread on core 0 00:30:01.961 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:01.961 00:30:01.961 real 0m11.468s 00:30:01.961 user 0m21.431s 00:30:01.961 sys 0m4.179s 00:30:01.961 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:01.961 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:01.961 ************************************ 00:30:01.961 END TEST nvmf_target_disconnect_tc2 00:30:01.961 ************************************ 00:30:01.961 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:01.961 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:01.961 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:01.961 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:01.961 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:30:01.961 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:01.961 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:30:01.961 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:01.961 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:01.961 rmmod nvme_tcp 00:30:01.961 rmmod nvme_fabrics 00:30:01.961 rmmod nvme_keyring 00:30:01.961 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:01.961 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:30:01.961 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:30:01.961 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3848990 ']' 00:30:01.961 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3848990 00:30:01.961 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3848990 ']' 00:30:01.961 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3848990 00:30:01.961 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:30:01.961 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:30:01.961 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3848990 00:30:01.961 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:30:01.961 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:30:01.961 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3848990' 00:30:01.961 killing process with pid 3848990 00:30:01.961 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 3848990 00:30:01.961 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3848990 00:30:02.223 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:02.223 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:02.223 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:02.223 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:30:02.223 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:30:02.223 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:02.223 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:30:02.223 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:02.223 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:02.223 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.223 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:02.223 20:08:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:04.167 20:08:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:04.167 00:30:04.167 real 0m21.869s 00:30:04.167 user 0m49.341s 00:30:04.167 sys 0m10.361s 00:30:04.167 20:08:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:04.167 20:08:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:04.167 ************************************ 00:30:04.167 END TEST nvmf_target_disconnect 00:30:04.167 ************************************ 00:30:04.167 20:08:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:04.167 00:30:04.167 real 6m33.751s 00:30:04.167 user 11m33.314s 00:30:04.167 sys 2m16.742s 00:30:04.167 20:08:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:04.167 20:08:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.167 ************************************ 00:30:04.167 END TEST nvmf_host 00:30:04.167 ************************************ 00:30:04.429 20:08:04 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:30:04.429 20:08:04 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:30:04.429 20:08:05 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:04.429 20:08:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:04.429 20:08:05 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:04.429 20:08:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:04.429 ************************************ 00:30:04.429 START TEST nvmf_target_core_interrupt_mode 00:30:04.429 ************************************ 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:04.429 * Looking for test storage... 00:30:04.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:04.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.429 --rc genhtml_branch_coverage=1 00:30:04.429 --rc genhtml_function_coverage=1 00:30:04.429 --rc genhtml_legend=1 00:30:04.429 --rc geninfo_all_blocks=1 00:30:04.429 --rc geninfo_unexecuted_blocks=1 00:30:04.429 00:30:04.429 ' 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:04.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.429 --rc genhtml_branch_coverage=1 00:30:04.429 --rc genhtml_function_coverage=1 00:30:04.429 --rc genhtml_legend=1 00:30:04.429 --rc geninfo_all_blocks=1 00:30:04.429 --rc geninfo_unexecuted_blocks=1 00:30:04.429 00:30:04.429 ' 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:04.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.429 --rc genhtml_branch_coverage=1 00:30:04.429 --rc genhtml_function_coverage=1 00:30:04.429 --rc genhtml_legend=1 00:30:04.429 --rc geninfo_all_blocks=1 00:30:04.429 --rc geninfo_unexecuted_blocks=1 00:30:04.429 00:30:04.429 ' 00:30:04.429 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:04.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.429 --rc genhtml_branch_coverage=1 00:30:04.429 --rc genhtml_function_coverage=1 00:30:04.429 --rc genhtml_legend=1 00:30:04.429 --rc geninfo_all_blocks=1 00:30:04.429 --rc geninfo_unexecuted_blocks=1 00:30:04.429 00:30:04.429 ' 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:04.689 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:04.690 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:04.690 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:04.690 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:04.690 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:30:04.690 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:30:04.690 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:04.690 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:04.690 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:04.690 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:04.690 ************************************ 00:30:04.690 START TEST nvmf_abort 00:30:04.690 ************************************ 00:30:04.690 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:04.690 * Looking for test storage... 00:30:04.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:04.690 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:04.690 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:30:04.690 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:04.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.951 --rc genhtml_branch_coverage=1 00:30:04.951 --rc genhtml_function_coverage=1 00:30:04.951 --rc genhtml_legend=1 00:30:04.951 --rc geninfo_all_blocks=1 00:30:04.951 --rc geninfo_unexecuted_blocks=1 00:30:04.951 00:30:04.951 ' 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:04.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.951 --rc genhtml_branch_coverage=1 00:30:04.951 --rc genhtml_function_coverage=1 00:30:04.951 --rc genhtml_legend=1 00:30:04.951 --rc geninfo_all_blocks=1 00:30:04.951 --rc geninfo_unexecuted_blocks=1 00:30:04.951 00:30:04.951 ' 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:04.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.951 --rc genhtml_branch_coverage=1 00:30:04.951 --rc genhtml_function_coverage=1 00:30:04.951 --rc genhtml_legend=1 00:30:04.951 --rc geninfo_all_blocks=1 00:30:04.951 --rc geninfo_unexecuted_blocks=1 00:30:04.951 00:30:04.951 ' 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:04.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.951 --rc genhtml_branch_coverage=1 00:30:04.951 --rc genhtml_function_coverage=1 00:30:04.951 --rc genhtml_legend=1 00:30:04.951 --rc geninfo_all_blocks=1 00:30:04.951 --rc geninfo_unexecuted_blocks=1 00:30:04.951 00:30:04.951 ' 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.951 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.952 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:30:04.952 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.952 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:30:04.952 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:04.952 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:04.952 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:04.952 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:04.952 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:04.952 20:08:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:04.952 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:04.952 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:04.952 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:04.952 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:04.952 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:04.952 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:30:04.952 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:30:04.952 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:04.952 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:04.952 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:04.952 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:04.952 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:04.952 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.952 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:04.952 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:04.952 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:04.952 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:04.952 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:30:04.952 20:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.096 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:13.096 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:30:13.096 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:13.096 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:13.096 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:13.096 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:13.096 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:13.097 20:08:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:13.097 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
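The scan running here is nvmf/common.sh walking every PCI function on the host and matching its vendor:device pair against a table of NVMe-capable NICs: Intel E810 (0x8086:0x1592 and 0x8086:0x159b), X722 (0x8086:0x37d2), and a list of Mellanox parts. Both hits on this box are E810-class 0x159b devices bound to the ice driver. A minimal sketch of the same idea, assuming only the standard Linux sysfs layout (this is not the SPDK helper itself):

#!/usr/bin/env bash
# Sketch: report network interfaces of PCI functions whose vendor:device
# pair matches the Intel E810 IDs seen in the trace above.
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")              # e.g. 0x8086 (Intel)
    device=$(<"$pci/device")              # e.g. 0x159b (E810-class)
    case "$vendor:$device" in
        0x8086:0x1592|0x8086:0x159b)
            echo "Found ${pci##*/} ($vendor - $device)"
            for net in "$pci"/net/*; do   # NICs bound to this function
                [ -e "$net" ] && echo "  net device: ${net##*/}"
            done
            ;;
    esac
done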
00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:13.097 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:13.097 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:13.097 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:13.097 20:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:13.097 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:13.097 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:13.097 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:13.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:13.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:30:13.097 00:30:13.097 --- 10.0.0.2 ping statistics --- 00:30:13.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.097 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:30:13.097 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:13.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:13.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:30:13.097 00:30:13.097 --- 10.0.0.1 ping statistics --- 00:30:13.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.097 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:30:13.097 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:13.098 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:30:13.098 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:13.098 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:13.098 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:13.098 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:13.098 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:13.098 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:13.098 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:13.098 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:30:13.098 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:13.098 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:13.098 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.098 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3854496 00:30:13.098 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3854496 00:30:13.098 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:13.098 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3854496 ']' 00:30:13.098 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:13.098 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:13.098 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:13.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:13.098 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:13.098 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.098 [2024-11-26 20:08:13.148322] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:13.098 [2024-11-26 20:08:13.149436] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:30:13.098 [2024-11-26 20:08:13.149485] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:13.098 [2024-11-26 20:08:13.233786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:13.098 [2024-11-26 20:08:13.285355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:13.098 [2024-11-26 20:08:13.285403] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:13.098 [2024-11-26 20:08:13.285411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:13.098 [2024-11-26 20:08:13.285419] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:13.098 [2024-11-26 20:08:13.285425] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:13.098 [2024-11-26 20:08:13.287386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:13.098 [2024-11-26 20:08:13.287547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.098 [2024-11-26 20:08:13.287548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:13.098 [2024-11-26 20:08:13.365317] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:13.098 [2024-11-26 20:08:13.366411] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:13.098 [2024-11-26 20:08:13.367012] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
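Before nvmf_tgt was launched above, nvmf_tcp_init split the two E810 ports into a point-to-point topology: cvl_0_0 moved into the fresh cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2/24, cvl_0_1 stayed in the root namespace as the initiator at 10.0.0.1/24, an iptables rule opened TCP port 4420 on the initiator interface, and one ping in each direction (0.656 ms and 0.294 ms) proved the link before anything NVMe-related ran. Condensed from the trace, with the same names and addresses (root privileges assumed):

# Target port into its own namespace; the initiator stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator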
00:30:13.098 [2024-11-26 20:08:13.367134] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:13.360 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:13.360 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:30:13.360 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:13.360 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:13.360 20:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.360 [2024-11-26 20:08:14.024444] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.360 Malloc0 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.360 Delay0 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
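nvmfappstart launched the target inside the namespace with -m 0xE (three reactors on cores 1-3, matching the "Total cores available: 3" notice), -i 0 for the shared-memory id, and --interrupt-mode, then blocked in waitforlisten until pid 3854496 answered on /var/tmp/spdk.sock. A rough sketch of that wait, under the assumption that polling rpc_get_methods is an adequate liveness probe (the real helper in autotest_common.sh is more elaborate):

# Sketch: poll until the app serves RPCs, failing fast if it dies first.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do                  # max_retries=100, as logged
        kill -0 "$pid" 2>/dev/null || return 1       # target exited early
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0                                 # RPC server is up
        fi
        sleep 0.1
    done
    return 1                                         # timed out
}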
00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.360 [2024-11-26 20:08:14.124429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.360 20:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:30:13.621 [2024-11-26 20:08:14.268897] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:16.167 Initializing NVMe Controllers 00:30:16.167 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:16.167 controller IO queue size 128 less than required 00:30:16.167 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:30:16.167 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:16.167 Initialization complete. Launching workers. 
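With the target listening, abort.sh provisioned it entirely over RPC. Collected from the rpc_cmd calls above into one plain sequence (the delay-bdev latencies are given in microseconds, so 1000000 is one second of artificial latency on every I/O, which is what keeps requests queued long enough for aborts to catch them):

# The seven RPCs from the trace, in order: transport first, listeners last.
RPC=scripts/rpc.py                                  # abort.sh wraps this as rpc_cmd
$RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
$RPC bdev_malloc_create 64 4096 -b Malloc0          # 64 MiB, 4096-byte blocks
$RPC bdev_delay_create -b Malloc0 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000    # 1 s on reads and writes
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420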
00:30:16.167 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28624 00:30:16.167 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28681, failed to submit 66 00:30:16.167 success 28624, unsuccessful 57, failed 0 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:16.168 rmmod nvme_tcp 00:30:16.168 rmmod nvme_fabrics 00:30:16.168 rmmod nvme_keyring 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3854496 ']' 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3854496 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3854496 ']' 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3854496 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3854496 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3854496' 00:30:16.168 killing process with pid 3854496 
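Reading the tallies above: "failed: 28624" on the NS line is not an error count. Those are the I/Os the example deliberately aborted, and the figure matches "success 28624" on the abort line exactly. Of the 28747 abort attempts, 66 could not be submitted and 57 lost the race to an I/O that had already completed; only 123 I/Os finished normally, as expected with Delay0 holding each request for a second. The invocation, reproduced from the trace (reading the flags the way SPDK example tools conventionally define them: -q queue depth, -t run time in seconds, -c core mask):

# Exact command from the log, paths relative to the spdk checkout.
build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128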
00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3854496 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3854496 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:16.168 20:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:18.081 20:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:18.081 00:30:18.081 real 0m13.559s 00:30:18.081 user 0m11.506s 00:30:18.081 sys 0m6.959s 00:30:18.081 20:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:18.081 20:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:18.081 ************************************ 00:30:18.081 END TEST nvmf_abort 00:30:18.081 ************************************ 00:30:18.342 20:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:18.342 20:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:18.342 20:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:18.342 20:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:18.342 ************************************ 00:30:18.342 START TEST nvmf_ns_hotplug_stress 00:30:18.342 ************************************ 00:30:18.342 20:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:18.342 * Looking for test storage... 
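The real/user/sys triple that closes nvmf_abort and the START banner that opens nvmf_ns_hotplug_stress both come from run_test, which wraps every suite. Simplified to the behavior visible in the log (the real helper in autotest_common.sh also manages xtrace and failure reporting):

# Sketch of run_test: banner, timed run, closing banner on success.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@" || return 1
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}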
00:30:18.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:18.342 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:18.342 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:30:18.342 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:18.342 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:18.342 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:18.342 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:18.342 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:18.342 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:18.342 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:18.342 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:18.342 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:18.342 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:18.342 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:18.342 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:18.342 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:18.342 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:18.342 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:18.342 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:18.342 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:18.342 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:18.342 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:18.342 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:18.342 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:18.342 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:18.342 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:18.604 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:18.604 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:18.604 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:18.604 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:18.604 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:18.604 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:18.604 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:18.604 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:18.604 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:18.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.604 --rc genhtml_branch_coverage=1 00:30:18.604 --rc genhtml_function_coverage=1 00:30:18.604 --rc genhtml_legend=1 00:30:18.605 --rc geninfo_all_blocks=1 00:30:18.605 --rc geninfo_unexecuted_blocks=1 00:30:18.605 00:30:18.605 ' 00:30:18.605 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:18.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.605 --rc genhtml_branch_coverage=1 00:30:18.605 --rc genhtml_function_coverage=1 00:30:18.605 --rc genhtml_legend=1 00:30:18.605 --rc geninfo_all_blocks=1 00:30:18.605 --rc geninfo_unexecuted_blocks=1 00:30:18.605 00:30:18.605 ' 00:30:18.605 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:18.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.605 --rc genhtml_branch_coverage=1 00:30:18.605 --rc genhtml_function_coverage=1 00:30:18.605 --rc genhtml_legend=1 00:30:18.605 --rc geninfo_all_blocks=1 00:30:18.605 --rc geninfo_unexecuted_blocks=1 00:30:18.605 00:30:18.605 ' 00:30:18.605 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:18.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.605 --rc genhtml_branch_coverage=1 00:30:18.605 --rc genhtml_function_coverage=1 
00:30:18.605 --rc genhtml_legend=1 00:30:18.605 --rc geninfo_all_blocks=1 00:30:18.605 --rc geninfo_unexecuted_blocks=1 00:30:18.605 00:30:18.605 ' 00:30:18.605 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:18.605 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:30:18.605 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:18.605 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:18.605 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:18.605 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:18.606 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:18.606 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:18.606 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:18.606 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:18.606 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:18.606 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:18.606 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:18.606 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:18.606 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:18.606 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:18.606 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:18.606 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:18.606 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:18.606 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:18.606 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:18.606 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:18.606 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:30:18.607 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.607 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.607 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.607 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:18.607 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.607 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:18.607 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:18.607 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:18.607 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:18.607 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:18.607 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:18.607 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:18.607 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:18.607 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:18.607 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:18.607 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:18.607 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:18.608 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:18.608 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:18.608 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:18.608 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:18.608 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:18.608 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:18.608 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.608 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:18.608 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:18.608 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:18.608 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:18.608 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:30:18.608 20:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:26.763 20:08:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:26.763 20:08:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:26.763 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:26.763 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:26.763 
20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:26.763 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:26.763 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:26.763 20:08:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:26.763 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:26.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:26.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:30:26.764 00:30:26.764 --- 10.0.0.2 ping statistics --- 00:30:26.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.764 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:26.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:26.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:30:26.764 00:30:26.764 --- 10.0.0.1 ping statistics --- 00:30:26.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.764 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3859278 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3859278 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3859278 ']' 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:26.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
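[Annotation] The device scan above matched the two Intel 0x159b (ice/E810) ports and their net devices, cvl_0_0 and cvl_0_1. common.sh then splits them: cvl_0_0 moves into a fresh network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens the NVMe/TCP port, and a ping in each direction confirms the path before anything NVMe-related starts. Condensed from the commands traced above (interface names and addresses as in this run; root required):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean ports
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator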
00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:26.764 20:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:26.764 [2024-11-26 20:08:26.752531] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:26.764 [2024-11-26 20:08:26.753680] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:30:26.764 [2024-11-26 20:08:26.753734] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:26.764 [2024-11-26 20:08:26.853068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:26.764 [2024-11-26 20:08:26.904055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:26.764 [2024-11-26 20:08:26.904105] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:26.764 [2024-11-26 20:08:26.904114] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:26.764 [2024-11-26 20:08:26.904121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:26.764 [2024-11-26 20:08:26.904128] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:26.764 [2024-11-26 20:08:26.906203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:26.764 [2024-11-26 20:08:26.906393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.764 [2024-11-26 20:08:26.906393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:26.764 [2024-11-26 20:08:26.985504] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:26.764 [2024-11-26 20:08:26.986490] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:26.764 [2024-11-26 20:08:26.987097] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:26.764 [2024-11-26 20:08:26.987251] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
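[Annotation] nvmfappstart then launches the target inside the namespace in interrupt mode (pid 3859278 in this run) and blocks on waitforlisten; the notices above confirm DPDK initialization, the three reactors on cores 1-3, and every poll-group thread coming up in interrupt rather than polled mode. A rough launch-and-wait equivalent, with waitforlisten approximated here by polling an RPC that always exists (the traced helper is more thorough):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk \
        "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    # Poll the default RPC socket (/var/tmp/spdk.sock) until the app answers.
    until "$spdk/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
        sleep 0.5
    done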
00:30:26.764 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:26.764 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:30:26.764 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:26.764 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:26.764 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:27.025 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:27.025 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:30:27.025 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:27.025 [2024-11-26 20:08:27.763352] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:27.025 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:27.286 20:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:27.547 [2024-11-26 20:08:28.144205] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:27.547 20:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:27.547 20:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:27.808 Malloc0 00:30:27.808 20:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:28.070 Delay0 00:30:28.070 20:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:28.332 20:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:28.332 NULL1 00:30:28.332 20:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
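[Annotation] Collected in order, the provisioning RPCs interleaved through the trace above, all values verbatim from this run: a TCP transport with the tcp-specific -o option and an 8192-byte I/O unit, subsystem cnode1 capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, a 32 MiB malloc bdev wrapped by a delay bdev (the four latency arguments are microseconds, so each I/O gains roughly a second), a null bdev, and both virtual bdevs attached as namespaces:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc_py bdev_malloc_create 32 512 -b Malloc0     # 32 MiB backing store, 512 B blocks
    $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc_py bdev_null_create NULL1 1000 512          # discard-everything bdev, resized below
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1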
00:30:28.593 20:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3859880 00:30:28.593 20:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880 00:30:28.593 20:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:28.593 20:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:29.980 Read completed with error (sct=0, sc=11) 00:30:29.980 20:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:29.980 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:29.980 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:29.980 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:29.980 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:29.980 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:29.980 20:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:29.980 20:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:30.240 true 00:30:30.240 20:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880 00:30:30.240 20:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:31.182 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:31.182 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:31.182 20:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:31.442 true 00:30:31.442 20:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880 00:30:31.442 20:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:31.702 20:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:30:31.702 20:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:31.702 20:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:31.962 true 00:30:31.962 20:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880 00:30:31.962 20:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:32.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:33.166 20:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:33.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:33.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:33.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:33.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:33.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:33.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:33.166 20:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:33.166 20:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:33.426 true 00:30:33.426 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880 00:30:33.426 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:34.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:34.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:34.368 20:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:34.368 20:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:34.368 20:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:34.628 true 00:30:34.628 20:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880 00:30:34.628 20:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:30:34.889 20:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:34.889 20:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:34.889 20:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:35.150 true 00:30:35.150 20:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880 00:30:35.150 20:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.410 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:35.410 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:30:35.670 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:35.670 true 00:30:35.670 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880 00:30:35.670 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.930 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:36.191 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:30:36.191 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:30:36.191 true 00:30:36.191 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880 00:30:36.191 20:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:37.576 20:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:37.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:37.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
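[Annotation] From here to the end of the section the trace repeats one iteration shape: while the spdk_nvme_perf initiator started above (PERF_PID 3859880; 30 s of queue-depth-128, 512-byte random reads against 10.0.0.2:4420) stays alive, the script detaches namespace 1, reattaches Delay0, and grows NULL1 by one unit per pass (null_size 1001, 1002, ...). The recurring "Read completed with error (sct=0, sc=11)" bursts are the point of the test rather than a failure: sct=0/sc=11 likely corresponds to NVMe generic status 0x0b, Invalid Namespace or Format, raised while namespace 1 is detached, and perf evidently tolerates it under -Q 1000, reporting only every thousandth occurrence. A reconstruction of the loop from the ns_hotplug_stress.sh line numbers in the trace (the exact loop syntax is assumed):

    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do   # iterate until perf finishes its 30 s run
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove nsid 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it back
        null_size=$((null_size + 1))
        $rpc_py bdev_null_resize NULL1 "$null_size"                      # resize under I/O too
    done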
00:30:37.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:37.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:37.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:37.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:37.576 20:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:30:37.576 20:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:30:37.836 true 00:30:37.836 20:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880 00:30:37.836 20:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.775 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:38.775 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:30:38.775 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:30:39.035 true 00:30:39.035 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880 00:30:39.035 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:39.035 20:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:39.295 20:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:30:39.295 20:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:30:39.555 true 00:30:39.555 20:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880 00:30:39.555 20:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.935 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:40.935 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:40.935 20:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:40.935 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:40.935 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:40.935 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:40.935 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:40.935 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:40.935 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:40.935 20:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:30:40.935 20:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:30:40.935 true 00:30:41.195 20:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880 00:30:41.195 20:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:42.137 20:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:42.137 20:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:30:42.137 20:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:30:42.137 true 00:30:42.397 20:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880 00:30:42.397 20:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:42.397 20:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:42.658 20:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:30:42.658 20:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:30:42.919 true 00:30:42.919 20:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880 00:30:42.919 20:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:43.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:43.861 20:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:30:43.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:43.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:43.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:44.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:44.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:44.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:44.124 20:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:30:44.124 20:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:30:44.385 true 00:30:44.385 20:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880 00:30:44.385 20:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:45.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:45.329 20:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:45.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:45.329 20:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:30:45.329 20:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:30:45.589 true 00:30:45.589 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880 00:30:45.589 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:45.589 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:45.850 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:30:45.850 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:46.110 true 00:30:46.110 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880 00:30:46.110 20:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:46.111 20:08:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:46.372 20:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:30:46.372 20:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:30:46.633 true 00:30:46.633 20:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880 00:30:46.633 20:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:46.633 20:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:46.894 20:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:46.894 20:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:47.156 true 00:30:47.156 20:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880 00:30:47.156 20:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:47.417 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:47.417 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:30:47.417 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:30:47.679 true 00:30:47.679 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880 00:30:47.679 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:47.939 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:47.940 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:30:47.940 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:30:48.201 true
00:30:48.201 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880
00:30:48.201 20:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:49.584 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:30:49.584 20:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:49.584 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:30:49.584 20:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:30:49.584 20:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:30:49.846 true
00:30:49.846 20:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880
00:30:49.846 20:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:50.789 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:30:50.789 20:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:50.789 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:30:50.789 20:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:30:50.789 20:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:30:51.049 true
00:30:51.049 20:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880
00:30:51.049 20:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:51.049 20:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:51.310 20:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:30:51.310 20:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:30:51.571 true
00:30:51.571 20:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880
00:30:51.571 20:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:51.571 20:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:51.833 20:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:30:51.833 20:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:30:52.093 true
00:30:52.093 20:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880
00:30:52.093 20:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:52.355 20:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:52.355 20:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:30:52.355 20:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:30:52.616 true
00:30:52.616 20:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880
00:30:52.616 20:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:53.997 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:30:53.997 20:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:53.997 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:30:53.997 20:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:30:53.997 20:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:30:53.997 true
00:30:53.997 20:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880
00:30:53.997 20:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:54.937 20:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:55.198 20:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:30:55.198 20:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:30:55.198 true
00:30:55.198 20:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880
00:30:55.198 20:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:55.458 20:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:55.719 20:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:30:55.719 20:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:30:55.719 true
00:30:55.719 20:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880
00:30:55.719 20:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:57.102 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:30:57.102 20:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:57.102 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
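The @44-@50 trace lines above are one pass of the serial hot-plug loop in test/nvmf/target/ns_hotplug_stress.sh: while the I/O generator (pid 3859880) is still alive, NSID 1 is removed from nqn.2016-06.io.spdk:cnode1, re-added backed by the Delay0 bdev, and the NULL1 null bdev is grown by one block. A minimal sketch of that loop follows; $rpc_py and $perf_pid are assumed names standing in for scripts/rpc.py and the traced pid, not identifiers taken from the script itself:

    # Sketch only, reconstructed from the sh@44-sh@50 trace lines above.
    null_size=1021
    while kill -0 "$perf_pid" 2>/dev/null; do                             # sh@44: loop while the I/O generator runs
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # sh@45: hot-remove NSID 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # sh@46: re-add it backed by Delay0
        null_size=$((null_size + 1))                                      # sh@49
        "$rpc_py" bdev_null_resize NULL1 "$null_size"                     # sh@50: grow NULL1 by one block per pass
    done

The suppressed read errors (sct=0, sc=11) interleaved with the trace are the point of the stress test: reads issued by the I/O generator race the namespace removal and fail, and the host driver suppresses the repeated messages.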
00:30:57.102 20:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:30:57.102 20:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:30:57.363 true
00:30:57.363 20:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880
00:30:57.363 20:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:58.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:30:58.304 20:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:58.304 20:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:30:58.304 20:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:30:58.563 true
00:30:58.563 20:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880
00:30:58.563 20:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:58.822 20:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
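At this point the I/O generator finishes and prints its per-namespace summary, shown below. The Total row is consistent with the two device rows: IOPS and MiB/s are straight sums, and the average latency is the IOPS-weighted mean of the per-namespace averages, which a quick shell check confirms:

    # IOPS-weighted average latency from the two NSID rows in the table below
    awk 'BEGIN { printf "%.1f\n", (2139.07*36321.71 + 17527.52*7302.47) / (2139.07 + 17527.52) }'
    # prints 10458.8, matching the Total row's 10458.80 to display rounding

The large gap between the two rows (36321.71 us average on NSID 1 versus 7302.47 us on NSID 2) reflects NSID 1 being backed by the Delay0 bdev that the hot-plug loop kept removing and re-adding.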
00:30:58.822 Initializing NVMe Controllers
00:30:58.822 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:58.822 Controller IO queue size 128, less than required.
00:30:58.822 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:58.822 Controller IO queue size 128, less than required.
00:30:58.822 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:58.822 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:58.822 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:30:58.822 Initialization complete. Launching workers.
00:30:58.822 ========================================================
00:30:58.822                                                                                                Latency(us)
00:30:58.822 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:30:58.822 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2139.07       1.04   36321.71    1839.92 1065943.55
00:30:58.822 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   17527.52       8.56    7302.47    1124.53  341613.71
00:30:58.822 ========================================================
00:30:58.822 Total                                                                  :   19666.59       9.60   10458.80    1124.53 1065943.55
00:30:58.822
00:30:58.822 20:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:30:58.822 20:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:30:59.082 true
00:30:59.082 20:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3859880
00:30:59.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3859880) - No such process
00:30:59.082 20:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3859880
00:30:59.082 20:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:59.341 20:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:59.602 20:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:30:59.602 20:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:30:59.602 20:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:30:59.602 20:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:59.602 20:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:30:59.602 null0
00:30:59.602 20:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:59.602 20:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:59.602 20:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:30:59.862 null1
00:30:59.862 20:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:59.862 20:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:59.862 20:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
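With the I/O generator gone, the stale kill -0 fails ("No such process"), the wait on sh@53 reaps it, the two remaining namespaces are dropped, and the script moves into its concurrent phase: lines @58-@64 create eight 100 MB null bdevs (4096-byte blocks) and launch one add_remove worker per bdev in the background, collecting the PIDs for the wait traced later as sh@66; the @14-@18 lines that follow are the traced body of add_remove itself, which adds and removes its namespace ten times. A rough reconstruction from those trace lines (loop bounds and argument order read off the trace; the exact script text may differ):

    # Reconstructed from the sh@14-@18 and sh@58-@66 trace lines; not the verbatim script.
    add_remove() {
        local nsid=$1 bdev=$2                        # sh@14
        for ((i = 0; i < 10; i++)); do               # sh@16: ten add/remove rounds per worker
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # sh@17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # sh@18
        done
    }
    nthreads=8; pids=()                              # sh@58
    for ((i = 0; i < nthreads; i++)); do             # sh@59-@60: one null bdev per worker
        "$rpc_py" bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do             # sh@62-@64: launch the workers in parallel
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"                                # sh@66: reap all eight workers

The interleaved, out-of-order @16-@18 trace lines from here on are the eight workers racing each other against the same subsystem, which is the concurrency the test is exercising.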
00:30:59.862 null2
00:30:59.862 20:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:59.862 20:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:59.862 20:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:31:00.122 null3
00:31:00.122 20:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:00.122 20:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:00.122 20:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:31:00.382 null4
00:31:00.382 20:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:00.382 20:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:00.382 20:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:31:00.382 null5
00:31:00.382 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:00.382 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:00.382 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:31:00.642 null6
00:31:00.642 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:00.642 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:00.642 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:31:00.903 null7
00:31:00.903 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:00.903 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:00.903 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:31:00.903 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:31:00.903 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:31:00.903 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- #
pids+=($!) 00:31:00.903 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:31:00.903 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:00.903 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:00.903 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:00.903 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:00.903 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:00.903 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:00.903 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:00.903 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:00.903 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:31:00.903 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3866062 3866064 3866065 3866068 3866072 3866074 3866076 3866078 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:00.904 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:01.164 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:01.165 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:01.165 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:01.165 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:01.165 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:01.165 20:09:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.165 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.165 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:01.165 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.165 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.165 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:01.165 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.165 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.165 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:01.165 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.165 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.165 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:01.165 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.165 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.165 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:01.165 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.165 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.165 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:01.165 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.165 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.165 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:01.165 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.165 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.165 20:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:01.425 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:01.425 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:01.425 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:01.425 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:01.425 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:01.425 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:01.425 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:01.425 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:01.425 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.425 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.425 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:01.686 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.687 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.687 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:01.687 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.687 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.687 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:01.687 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.687 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.687 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:01.687 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.687 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.687 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:01.687 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.687 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.687 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:01.687 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.687 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.687 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:01.687 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.687 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.687 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:01.687 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:01.687 20:09:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:01.687 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:01.687 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:01.687 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:01.687 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:01.948 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:01.948 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:01.948 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.948 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.948 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:01.948 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.948 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.949 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:01.949 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.949 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.949 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:01.949 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.949 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.949 20:09:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:01.949 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.949 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.949 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:01.949 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.949 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.949 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:01.949 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.949 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.949 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:01.949 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.949 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.949 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:01.949 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:02.211 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:02.211 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:02.211 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:02.211 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:02.211 
20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:02.211 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:02.211 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.211 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.211 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:02.211 20:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:02.211 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.211 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.211 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:02.504 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.505 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.505 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:02.505 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.505 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.505 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:02.505 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.505 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.505 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:02.505 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.505 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.505 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:02.505 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.505 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.505 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:02.505 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.505 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.505 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:02.505 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:02.505 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:02.505 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:02.505 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:02.505 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:02.505 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:02.505 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:02.505 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:02.818 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.818 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:31:02.818 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:02.818 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.818 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.818 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:02.818 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.818 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.818 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:02.818 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.818 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.818 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:02.818 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.818 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.818 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.818 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.819 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:02.819 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:02.819 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.819 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.819 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:02.819 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:02.819 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.819 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.819 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:02.819 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:02.819 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:02.819 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:03.140 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:03.140 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:03.140 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:03.140 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.140 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.140 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:03.140 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.140 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.140 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.140 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:03.140 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.140 20:09:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.141 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:03.141 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.141 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.141 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:03.141 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.141 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.141 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:03.141 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.141 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.141 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:03.141 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.141 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.141 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:03.141 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:03.141 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.141 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.141 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:03.422 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:03.422 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:03.422 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:03.422 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:03.422 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:03.422 20:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:03.422 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.422 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.422 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:03.422 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.422 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.422 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.422 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:03.422 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.422 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.422 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:03.422 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.422 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.422 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.422 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.422 20:09:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:03.422 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:03.422 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.422 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.422 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:03.422 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.422 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.422 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:03.683 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:03.683 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:03.683 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:03.683 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:03.683 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.683 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.683 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:03.683 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:03.683 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:03.683 20:09:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:03.683 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.683 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.683 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:03.683 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.684 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.684 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:03.684 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.684 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.684 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:03.684 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.684 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.945 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:03.945 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.945 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.945 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:03.945 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.945 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.945 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:03.945 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.945 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.945 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.945 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:03.945 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:03.945 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:03.945 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:03.945 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:03.945 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:03.945 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:03.945 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.945 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.945 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:03.945 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.945 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.945 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:04.208 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:04.208 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.208 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.208 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.208 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.208 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:04.208 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:04.208 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.208 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.208 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:04.208 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.208 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.208 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.208 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:04.208 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.208 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.208 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:04.208 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:04.208 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.208 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.208 20:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:04.470 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:04.470 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:04.470 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:04.470 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:04.470 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.470 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.470 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:04.470 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:04.470 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:04.470 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.470 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.470 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.470 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.470 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.470 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.470 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.470 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.470 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.470 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.470 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.470 20:09:05 
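[Annotation] The sh@16/sh@17/sh@18 markers throughout the trace above all come from the namespace add/remove loop in test/nvmf/target/ns_hotplug_stress.sh. A minimal sketch of that loop, reconstructed only from what the trace shows (the i < 10 bound, the two rpc.py calls, and the interleaved ordering, which suggests the RPCs run as background jobs); the exact upstream script body is an assumption:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1

    # Ten passes (sh@16): hot-add namespaces 1..8, backed by null bdevs
    # null0..null7, then hot-remove them again while host I/O keeps running.
    for ((i = 0; i < 10; ++i)); do
        for n in {1..8}; do
            "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$subsys" "null$((n - 1))" &   # sh@17
        done
        wait
        for n in {1..8}; do
            "$rpc_py" nvmf_subsystem_remove_ns "$subsys" "$n" &                    # sh@18
        done
        wait
    done

The backgrounded RPCs explain why add_ns/remove_ns lines for different nsids appear out of order and why counter traces sometimes repeat back to back in the log.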
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.471 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.731 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.731 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.731 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.731 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.731 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:31:04.731 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:31:04.731 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:04.731 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:31:04.731 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:04.731 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:31:04.731 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:04.731 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:04.731 rmmod nvme_tcp 00:31:04.731 rmmod nvme_fabrics 00:31:04.731 rmmod nvme_keyring 00:31:04.731 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:04.731 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:31:04.731 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:31:04.731 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3859278 ']' 00:31:04.731 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3859278 00:31:04.731 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3859278 ']' 00:31:04.731 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3859278 00:31:04.731 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:31:04.731 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:04.731 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3859278 00:31:04.992 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:04.992 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:04.992 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3859278' 00:31:04.992 killing process with pid 3859278 00:31:04.992 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3859278 00:31:04.992 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3859278 00:31:04.992 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:04.992 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:04.992 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:04.992 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:31:04.992 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:31:04.992 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:04.992 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:31:04.992 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:04.992 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:04.992 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.992 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:04.992 20:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.539 20:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:07.539 00:31:07.539 real 0m48.834s 00:31:07.539 user 2m58.000s 00:31:07.539 sys 0m20.626s 00:31:07.539 20:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:07.539 20:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:07.539 ************************************ 00:31:07.539 END TEST nvmf_ns_hotplug_stress 00:31:07.539 ************************************ 00:31:07.539 20:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:07.539 20:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:07.539 20:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:07.539 20:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:07.539 ************************************ 00:31:07.539 START TEST nvmf_delete_subsystem 00:31:07.539 
************************************ 00:31:07.539 20:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:07.539 * Looking for test storage... 00:31:07.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:07.539 20:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:07.539 20:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:31:07.539 20:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:07.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.539 --rc genhtml_branch_coverage=1 00:31:07.539 --rc genhtml_function_coverage=1 00:31:07.539 --rc genhtml_legend=1 00:31:07.539 --rc geninfo_all_blocks=1 00:31:07.539 --rc geninfo_unexecuted_blocks=1 00:31:07.539 00:31:07.539 ' 00:31:07.539 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:07.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.540 --rc genhtml_branch_coverage=1 00:31:07.540 --rc genhtml_function_coverage=1 00:31:07.540 --rc genhtml_legend=1 00:31:07.540 --rc geninfo_all_blocks=1 00:31:07.540 --rc geninfo_unexecuted_blocks=1 00:31:07.540 00:31:07.540 ' 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:07.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.540 --rc genhtml_branch_coverage=1 00:31:07.540 --rc genhtml_function_coverage=1 00:31:07.540 --rc genhtml_legend=1 00:31:07.540 --rc geninfo_all_blocks=1 00:31:07.540 --rc geninfo_unexecuted_blocks=1 00:31:07.540 00:31:07.540 ' 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:07.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.540 --rc genhtml_branch_coverage=1 00:31:07.540 --rc genhtml_function_coverage=1 00:31:07.540 --rc 
genhtml_legend=1 00:31:07.540 --rc geninfo_all_blocks=1 00:31:07.540 --rc geninfo_unexecuted_blocks=1 00:31:07.540 00:31:07.540 ' 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:07.540 20:09:08 
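[Annotation] The scripts/common.sh@333-@368 lines just traced implement the version comparison behind lt 1.15 2, which decides whether the detected lcov needs the extra --rc branch/function coverage options echoed above. A compact sketch of that logic, with function and variable names taken from the trace (the real helper supports more operators and edge cases, so treat this as an approximation):

    decimal() {
        # Normalize one version component to a plain number (the traced helper
        # validates with [[ $d =~ ^[0-9]+$ ]] exactly as shown above).
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
    }

    cmp_versions() {
        local ver1 ver1_l ver2 ver2_l
        local op=$2 lt=0 gt=0 v
        IFS=.-: read -ra ver1 <<< "$1"      # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"      # "2"    -> (2)
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        # Compare component by component up to the longer of the two lengths.
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            ver1[v]=$(decimal "${ver1[v]:-0}")
            ver2[v]=$(decimal "${ver2[v]:-0}")
            if ((ver1[v] > ver2[v])); then gt=1; break
            elif ((ver1[v] < ver2[v])); then lt=1; break
            fi
        done
        case "$op" in
            '<') ((lt == 1)) ;;
            '>') ((gt == 1)) ;;
            '=') ((lt == 0 && gt == 0)) ;;
        esac
    }

    lt() { cmp_versions "$1" '<' "$2"; }    # lt 1.15 2 returns 0, as in the trace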
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:31:07.540 20:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:15.697 20:09:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:15.697 20:09:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:15.697 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.697 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:15.698 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.698 20:09:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:15.698 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:15.698 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:15.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:15.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:31:15.698 00:31:15.698 --- 10.0.0.2 ping statistics --- 00:31:15.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.698 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:15.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:15.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:31:15.698 00:31:15.698 --- 10.0.0.1 ping statistics --- 00:31:15.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.698 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3871707 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3871707 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3871707 ']' 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:15.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
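For reference, the nvmf_tcp_init trace above (nvmf/common.sh@250-291) reduces to the following standalone sequence. Interface names (cvl_0_0, cvl_0_1), addresses, and the firewall rule are taken directly from the log; ipts is SPDK's iptables wrapper, which adds the SPDK_NVMF comment visible at @790.

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1                 # start from clean interfaces
    ip netns add cvl_0_0_ns_spdk                                         # target side gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP on the initiator port
    ping -c 1 10.0.0.2                                                   # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator reachability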
00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:15.698 20:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:15.698 [2024-11-26 20:09:15.680968] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:15.698 [2024-11-26 20:09:15.682083] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:31:15.698 [2024-11-26 20:09:15.682137] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:15.698 [2024-11-26 20:09:15.781235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:15.699 [2024-11-26 20:09:15.832530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:15.699 [2024-11-26 20:09:15.832583] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:15.699 [2024-11-26 20:09:15.832591] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:15.699 [2024-11-26 20:09:15.832598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:15.699 [2024-11-26 20:09:15.832604] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:15.699 [2024-11-26 20:09:15.834233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.699 [2024-11-26 20:09:15.834266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.699 [2024-11-26 20:09:15.912517] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:15.699 [2024-11-26 20:09:15.913259] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:15.699 [2024-11-26 20:09:15.913458] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
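The startup notices above come from nvmfappstart launching the target inside the namespace with --interrupt-mode, so both reactors block waiting for events instead of busy-polling. A minimal sketch of the launch-and-wait pattern follows; the readiness loop is a hypothetical stand-in for SPDK's waitforlisten helper, and the rpc.py invocation (path, -s socket, -t timeout) is an assumption rather than a transcript of the helper.

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # hypothetical stand-in for waitforlisten: poll the RPC socket until the app answers
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done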
00:31:15.699 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:15.699 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:31:15.699 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:15.699 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:15.699 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:15.961 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:15.961 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:15.961 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.962 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:15.962 [2024-11-26 20:09:16.547307] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:15.962 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.962 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:15.962 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.962 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:15.962 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.962 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:15.962 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.962 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:15.962 [2024-11-26 20:09:16.579724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:15.962 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.962 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:31:15.962 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.962 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:15.962 NULL1 00:31:15.962 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.962 20:09:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:15.962 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.962 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:15.962 Delay0 00:31:15.962 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.962 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:15.962 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.962 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:15.962 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.962 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3871870 00:31:15.962 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:31:15.962 20:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:15.962 [2024-11-26 20:09:16.705271] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
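The rpc_cmd calls traced above build the whole target stack before perf starts. Written out against scripts/rpc.py (the path is assumed; the subcommands and flags are exactly as traced), with the delay bdev adding roughly one second of artificial latency to every I/O class so that requests are guaranteed to still be in flight when the subsystem is deleted mid-run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512                 # 1000 MiB backing bdev, 512-byte blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000      # ~1,000,000 us (~1 s) per I/O
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0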
00:31:17.875 20:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:17.875 20:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:17.875 20:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:18.136 Read completed with error (sct=0, sc=8)
00:31:18.136 Write completed with error (sct=0, sc=8)
00:31:18.136 starting I/O failed: -6
00:31:18.136 (the three lines above repeat, in mixed order, several hundred times through 00:31:19.079 as the subsystem delete aborts every outstanding and newly submitted I/O; condensed)
00:31:18.136 [2024-11-26 20:09:18.844389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec82c0 is same with the state(6) to be set
00:31:18.137 [2024-11-26 20:09:18.846756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec8680 is same with the state(6) to be set
00:31:18.137 [2024-11-26 20:09:18.847657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f292000d490 is same with the state(6) to be set
00:31:19.079 [2024-11-26 20:09:19.803600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec99b0 is same with the state(6) to be set
00:31:19.079 [2024-11-26 20:09:19.848524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec8860 is same with the state(6) to be set
00:31:19.079 [2024-11-26 20:09:19.848779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec84a0 is same with the state(6) to be set
00:31:19.079 [2024-11-26 20:09:19.849109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f292000d020 is same with the state(6) to be set
00:31:19.079 [2024-11-26 20:09:19.849458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f292000d7c0 is same with the state(6) to be set
00:31:19.079 Initializing NVMe Controllers
00:31:19.079 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:19.079 Controller IO queue size 128, less than required.
00:31:19.079 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:19.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:31:19.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:31:19.079 Initialization complete. Launching workers.
00:31:19.079 ========================================================
00:31:19.079                                                                            Latency(us)
00:31:19.079 Device Information                                                      :    IOPS   MiB/s    Average        min        max
00:31:19.079 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  159.98    0.08  919965.55    1595.43 1044853.08
00:31:19.079 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  149.05    0.07 1049760.85     272.16 2004223.08
00:31:19.079 ========================================================
00:31:19.079 Total                                                                   :  309.04    0.15  982567.78     272.16 2004223.08
00:31:19.079 
00:31:19.079 [2024-11-26 20:09:19.850005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec99b0 (9): Bad file descriptor
00:31:19.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:31:19.079 20:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:19.079 20:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:31:19.079 20:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3871870
00:31:19.079 20:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3871870
00:31:19.651 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3871870) - No such process
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3871870
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3871870
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3871870
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:19.651 [2024-11-26 20:09:20.383745] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3872603
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3872603
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:31:19.651 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:19.912 [2024-11-26 20:09:20.481962] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
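Per the trace, the first pass (delete_subsystem.sh lines 32-45) deleted the subsystem mid-run and required perf to die with an error, while this second pass (lines 48-67) leaves the subsystem up for its 3-second run and requires perf to exit cleanly. A rough condensation of the two assertions, not the verbatim script; rpc_cmd and NOT are the SPDK test helpers seen above, with NOT succeeding only when the wrapped command fails:

    # pass 1: delete while I/O is still queued in Delay0; perf must fail
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # perf should exit on its own once its target vanishes
        (( delay++ > 30 )) && return 1          # give up after ~15 s of 0.5 s polls
        sleep 0.5
    done
    NOT wait "$perf_pid"                        # reaping it must report a nonzero exit status

    # pass 2: leave the subsystem alone; perf must finish its 3 s run and exit 0
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && return 1
        sleep 0.5
    done
    wait "$perf_pid"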
00:31:20.173 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:20.173 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3872603
00:31:20.173 20:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:20.742 20:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:20.742 20:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3872603
00:31:20.742 20:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:21.313 20:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:21.313 20:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3872603
00:31:21.313 20:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:21.883 20:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:21.883 20:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3872603
00:31:21.883 20:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:22.144 20:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:22.144 20:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3872603
00:31:22.144 20:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:22.715 20:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:22.715 20:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3872603
00:31:22.715 20:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:22.975 Initializing NVMe Controllers
00:31:22.975 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:22.975 Controller IO queue size 128, less than required.
00:31:22.975 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:22.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:31:22.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:31:22.975 Initialization complete. Launching workers.
00:31:22.975 ========================================================
00:31:22.975                                                                            Latency(us)
00:31:22.975 Device Information                                                      :    IOPS   MiB/s    Average        min        max
00:31:22.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  128.00    0.06 1002533.40 1000224.30 1007030.33
00:31:22.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  128.00    0.06 1004363.93 1000288.59 1012244.82
00:31:22.975 ========================================================
00:31:22.975 Total                                                                   :  256.00    0.12 1003448.66 1000224.30 1012244.82
00:31:22.975 
00:31:23.235 20:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:23.235 20:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3872603
00:31:23.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3872603) - No such process
00:31:23.235 20:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3872603
00:31:23.235 20:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:31:23.235 20:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:31:23.235 20:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:23.235 20:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:31:23.235 20:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:23.235 20:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:31:23.235 20:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:23.235 20:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:23.235 rmmod nvme_tcp
00:31:23.235 rmmod nvme_fabrics
00:31:23.235 rmmod nvme_keyring
00:31:23.235 20:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:23.235 20:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:31:23.235 20:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:31:23.235 20:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3871707 ']'
00:31:23.235 20:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3871707
00:31:23.235 20:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3871707 ']'
00:31:23.235 20:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3871707
00:31:23.235 20:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:31:23.235 20:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:23.235 20:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3871707 00:31:23.496 20:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:23.496 20:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:23.496 20:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3871707' 00:31:23.496 killing process with pid 3871707 00:31:23.496 20:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3871707 00:31:23.496 20:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3871707 00:31:23.496 20:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:23.496 20:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:23.496 20:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:23.496 20:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:31:23.496 20:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:31:23.496 20:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:23.496 20:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:31:23.496 20:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:23.496 20:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:23.496 20:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:23.496 20:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:23.496 20:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:26.040 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:26.040 00:31:26.040 real 0m18.374s 00:31:26.040 user 0m26.479s 00:31:26.040 sys 0m7.556s 00:31:26.040 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:26.041 ************************************ 00:31:26.041 END TEST nvmf_delete_subsystem 00:31:26.041 ************************************ 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:26.041 ************************************ 00:31:26.041 START TEST nvmf_host_management 00:31:26.041 ************************************ 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:26.041 * Looking for test storage... 00:31:26.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:26.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.041 --rc genhtml_branch_coverage=1 00:31:26.041 --rc genhtml_function_coverage=1 00:31:26.041 --rc genhtml_legend=1 00:31:26.041 --rc geninfo_all_blocks=1 00:31:26.041 --rc geninfo_unexecuted_blocks=1 00:31:26.041 00:31:26.041 ' 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:26.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.041 --rc genhtml_branch_coverage=1 00:31:26.041 --rc genhtml_function_coverage=1 00:31:26.041 --rc genhtml_legend=1 00:31:26.041 --rc geninfo_all_blocks=1 00:31:26.041 --rc geninfo_unexecuted_blocks=1 00:31:26.041 00:31:26.041 ' 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:26.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.041 --rc genhtml_branch_coverage=1 00:31:26.041 --rc genhtml_function_coverage=1 00:31:26.041 --rc genhtml_legend=1 00:31:26.041 --rc geninfo_all_blocks=1 00:31:26.041 --rc geninfo_unexecuted_blocks=1 00:31:26.041 00:31:26.041 ' 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:26.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.041 --rc genhtml_branch_coverage=1 00:31:26.041 --rc genhtml_function_coverage=1 00:31:26.041 --rc genhtml_legend=1 
00:31:26.041 --rc geninfo_all_blocks=1 00:31:26.041 --rc geninfo_unexecuted_blocks=1 00:31:26.041 00:31:26.041 ' 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:26.041 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:26.042 20:09:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:31:26.042 20:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:34.185 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:34.185 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:31:34.185 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:34.185 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:34.185 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:34.185 20:09:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:34.185 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:34.185 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:31:34.185 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:34.185 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:31:34.185 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:31:34.185 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:31:34.185 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:31:34.185 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:31:34.185 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:31:34.185 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:34.185 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:34.185 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:34.186 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:34.186 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
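The discovery loop running here is worth unpacking: for each supported PCI function found (two Intel E810 ports on this rig, device ID 0x159b, bound to the ice driver), nvmf/common.sh resolves the attached kernel netdevs through sysfs before picking target/initiator interfaces. A minimal sketch of that lookup — the helper name is made up, the sysfs layout is standard, everything else mirrors the trace:

    # list_pci_netdevs is a hypothetical name; the body follows the trace above.
    list_pci_netdevs() {
        local pci=$1
        local pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # one entry per netdev
        pci_net_devs=("${pci_net_devs[@]##*/}")                 # strip the sysfs path
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    }
    list_pci_netdevs 0000:4b:00.0   # prints cvl_0_0 on this machine, per the log below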
00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:34.186 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:34.186 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:34.186 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:34.187 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:34.187 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:34.187 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:34.187 20:09:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:34.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:34.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:31:34.187 00:31:34.187 --- 10.0.0.2 ping statistics --- 00:31:34.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.187 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:34.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:34.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:31:34.187 00:31:34.187 --- 10.0.0.1 ping statistics --- 00:31:34.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.187 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3877424 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3877424 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3877424 ']' 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:34.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:34.187 [2024-11-26 20:09:34.166227] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:34.187 [2024-11-26 20:09:34.167359] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:31:34.187 [2024-11-26 20:09:34.167412] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:34.187 [2024-11-26 20:09:34.265441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:34.187 [2024-11-26 20:09:34.317969] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:34.187 [2024-11-26 20:09:34.318017] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:34.187 [2024-11-26 20:09:34.318026] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:34.187 [2024-11-26 20:09:34.318033] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:34.187 [2024-11-26 20:09:34.318039] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:34.187 [2024-11-26 20:09:34.320099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:34.187 [2024-11-26 20:09:34.320236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:34.187 [2024-11-26 20:09:34.320409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:34.187 [2024-11-26 20:09:34.320410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:34.187 [2024-11-26 20:09:34.398991] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:34.187 [2024-11-26 20:09:34.400084] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:34.187 [2024-11-26 20:09:34.400193] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:34.187 [2024-11-26 20:09:34.400824] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:34.187 [2024-11-26 20:09:34.400864] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
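With the namespace plumbed and both directions answering pings, starttarget has launched nvmf_tgt inside the netns; the DPDK EAL, reactor, and interrupt-mode notices above come from that process starting with core mask 0x1E (cores 1-4). A condensed sketch of the launch-and-wait step, assuming this workspace's layout — the polling loop is a simplified stand-in for waitforlisten, using the always-available spdk_get_version RPC rather than the real helper:

    NS=cvl_0_0_ns_spdk
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
    nvmfpid=$!
    # Simplified stand-in for waitforlisten: poll the default RPC socket
    # (/var/tmp/spdk.sock) until the target answers.
    until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done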
00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:34.187 20:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:34.448 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:34.448 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:34.448 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.448 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:34.448 [2024-11-26 20:09:35.017651] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:34.448 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.448 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:31:34.448 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:34.448 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:34.448 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:34.448 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:31:34.448 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:31:34.448 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.448 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:34.448 Malloc0 00:31:34.448 [2024-11-26 20:09:35.125973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:34.448 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.448 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:31:34.449 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:34.449 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:34.449 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3877764 00:31:34.449 20:09:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3877764 /var/tmp/bdevperf.sock 00:31:34.449 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3877764 ']' 00:31:34.449 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:34.449 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:34.449 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:34.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:34.449 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:34.449 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:34.449 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:31:34.449 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:34.449 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:34.449 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:34.449 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:34.449 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:34.449 { 00:31:34.449 "params": { 00:31:34.449 "name": "Nvme$subsystem", 00:31:34.449 "trtype": "$TEST_TRANSPORT", 00:31:34.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:34.449 "adrfam": "ipv4", 00:31:34.449 "trsvcid": "$NVMF_PORT", 00:31:34.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:34.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:34.449 "hdgst": ${hdgst:-false}, 00:31:34.449 "ddgst": ${ddgst:-false} 00:31:34.449 }, 00:31:34.449 "method": "bdev_nvme_attach_controller" 00:31:34.449 } 00:31:34.449 EOF 00:31:34.449 )") 00:31:34.449 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:34.449 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
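The heredoc above is gen_nvmf_target_json's per-subsystem template: the $subsystem / $TEST_TRANSPORT / $NVMF_FIRST_TARGET_IP placeholders get expanded, the fragments are joined by jq, and the result (printed next in the trace) reaches bdevperf as --json /dev/fd/63, i.e. via process substitution. Restating the invocation and its knobs from the trace:

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10
    # -q 64    : 64 outstanding I/Os per bdev
    # -o 65536 : 64 KiB per I/O
    # -w verify: read-back verification workload
    # -t 10    : run for 10 seconds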
00:31:34.449 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:34.449 20:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:34.449 "params": { 00:31:34.449 "name": "Nvme0", 00:31:34.449 "trtype": "tcp", 00:31:34.449 "traddr": "10.0.0.2", 00:31:34.449 "adrfam": "ipv4", 00:31:34.449 "trsvcid": "4420", 00:31:34.449 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:34.449 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:34.449 "hdgst": false, 00:31:34.449 "ddgst": false 00:31:34.449 }, 00:31:34.449 "method": "bdev_nvme_attach_controller" 00:31:34.449 }' 00:31:34.449 [2024-11-26 20:09:35.235684] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:31:34.449 [2024-11-26 20:09:35.235767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3877764 ] 00:31:34.710 [2024-11-26 20:09:35.333087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:34.710 [2024-11-26 20:09:35.386213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:34.970 Running I/O for 10 seconds... 00:31:35.543 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:35.543 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:35.543 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:35.543 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.543 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:35.543 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.543 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:35.543 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:35.543 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:35.543 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:35.543 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:35.543 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:35.543 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:35.543 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:35.543 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:35.543 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:35.543 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.543 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:35.543 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.543 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=615 00:31:35.543 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 615 -ge 100 ']' 00:31:35.543 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:31:35.544 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:31:35.544 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:31:35.544 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:35.544 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.544 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:35.544 [2024-11-26 20:09:36.133308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1995f20 is same with the state(6) to be set 00:31:35.544 [2024-11-26 20:09:36.133673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:35.544 [2024-11-26 20:09:36.133725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.133738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:35.544 [2024-11-26 20:09:36.133746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.133755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:35.544 [2024-11-26 20:09:36.133763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.133772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:35.544 [2024-11-26 20:09:36.133780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.133788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd010 is same with the state(6) to be set 00:31:35.544 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.544 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:35.544 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.544 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:35.544 [2024-11-26 20:09:36.141028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 20:09:36.141065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.141085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 20:09:36.141093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.141104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 20:09:36.141112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.141121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 20:09:36.141130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.141140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 20:09:36.141148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.141166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 20:09:36.141174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.141193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 20:09:36.141201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.141211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 20:09:36.141219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.141228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 
20:09:36.141236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.141246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 20:09:36.141254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.141263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 20:09:36.141270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.141280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 20:09:36.141287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.141296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 20:09:36.141303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.141313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 20:09:36.141320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.141330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 20:09:36.141337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.141347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 20:09:36.141355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.141365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 20:09:36.141372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.141382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 20:09:36.141390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.141400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 20:09:36.141410] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.141420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 20:09:36.141428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.141438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 20:09:36.141445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.141454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 20:09:36.141462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.141472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 20:09:36.141480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.141489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 20:09:36.141497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.141507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 20:09:36.141515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.141524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 20:09:36.141532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.141542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 20:09:36.141549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.141559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 20:09:36.141567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.544 [2024-11-26 20:09:36.141576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.544 [2024-11-26 20:09:36.141584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command WRITE / spdk_nvme_print_completion ABORTED - SQ DELETION *NOTICE* pair repeats for cid 23 through 57 (lba 93056 through 97408 in 128-block steps) as every remaining queued write is aborted by the submission queue deletion ...]
00:31:35.546 [2024-11-26 20:09:36.143489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:31:35.546 task offset: 89344 on job bdev=Nvme0n1 fails
00:31:35.546
00:31:35.546 Latency(us)
00:31:35.546 [2024-11-26T19:09:36.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:35.546 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:35.546 Job: Nvme0n1 ended in about 0.43 seconds with error
00:31:35.546 Verification LBA range: start 0x0 length 0x400
00:31:35.546 Nvme0n1 : 0.43 1639.17 102.45 150.30 0.00 34663.95 1672.53 34078.72
00:31:35.546 [2024-11-26T19:09:36.367Z] ===================================================================================================================
00:31:35.546 [2024-11-26T19:09:36.367Z] Total : 1639.17 102.45 150.30 0.00 34663.95 1672.53 34078.72
00:31:35.546 [2024-11-26 20:09:36.145699] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:31:35.546 [2024-11-26 20:09:36.145736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dd010 (9): Bad file descriptor
00:31:35.546 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:35.546 20:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:31:35.546 [2024-11-26 20:09:36.193580] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:31:36.489 20:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3877764 00:31:36.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3877764) - No such process 00:31:36.489 20:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:36.489 20:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:36.489 20:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:36.489 20:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:36.489 20:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:36.489 20:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:36.489 20:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:36.489 20:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:36.489 { 00:31:36.489 "params": { 00:31:36.489 "name": "Nvme$subsystem", 00:31:36.489 "trtype": "$TEST_TRANSPORT", 00:31:36.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:36.489 "adrfam": "ipv4", 00:31:36.489 "trsvcid": "$NVMF_PORT", 00:31:36.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:36.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:36.489 "hdgst": ${hdgst:-false}, 00:31:36.489 "ddgst": ${ddgst:-false} 00:31:36.489 }, 00:31:36.489 "method": "bdev_nvme_attach_controller" 00:31:36.489 } 00:31:36.489 EOF 00:31:36.489 )") 00:31:36.489 20:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:36.489 20:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:31:36.489 20:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:36.489 20:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:36.489 "params": { 00:31:36.489 "name": "Nvme0", 00:31:36.489 "trtype": "tcp", 00:31:36.489 "traddr": "10.0.0.2", 00:31:36.489 "adrfam": "ipv4", 00:31:36.489 "trsvcid": "4420", 00:31:36.489 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:36.489 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:36.489 "hdgst": false, 00:31:36.489 "ddgst": false 00:31:36.489 }, 00:31:36.489 "method": "bdev_nvme_attach_controller" 00:31:36.489 }' 00:31:36.489 [2024-11-26 20:09:37.215973] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
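The bdevperf re-run above receives its bdev configuration through a file descriptor (--json /dev/fd/62) built on the fly by gen_nvmf_target_json, so no config file ever touches disk. A minimal standalone sketch of the same pattern using process substitution follows; the attach parameters are copied from the JSON printed in the trace, while the surrounding "subsystems" wrapper is the standard SPDK JSON-config layout and is assumed here rather than taken from this log:

# Sketch: feed a generated SPDK bdev config to bdevperf without a temp file.
./build/examples/bdevperf -q 64 -o 65536 -w verify -t 1 --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)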
00:31:36.489 [2024-11-26 20:09:37.216053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3878143 ] 00:31:36.751 [2024-11-26 20:09:37.307041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:36.751 [2024-11-26 20:09:37.359205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.751 Running I/O for 1 seconds... 00:31:38.136 2063.00 IOPS, 128.94 MiB/s 00:31:38.136 Latency(us) 00:31:38.136 [2024-11-26T19:09:38.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:38.136 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:38.136 Verification LBA range: start 0x0 length 0x400 00:31:38.136 Nvme0n1 : 1.02 2090.97 130.69 0.00 0.00 29900.28 3399.68 34297.17 00:31:38.136 [2024-11-26T19:09:38.957Z] =================================================================================================================== 00:31:38.136 [2024-11-26T19:09:38.957Z] Total : 2090.97 130.69 0.00 0.00 29900.28 3399.68 34297.17 00:31:38.136 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:31:38.136 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:31:38.136 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:38.136 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:38.136 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:31:38.136 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:38.136 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:31:38.136 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:38.136 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:31:38.136 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:38.136 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:38.136 rmmod nvme_tcp 00:31:38.136 rmmod nvme_fabrics 00:31:38.136 rmmod nvme_keyring 00:31:38.136 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:38.136 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:31:38.136 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:31:38.136 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3877424 ']' 00:31:38.136 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3877424 00:31:38.136 20:09:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3877424 ']' 00:31:38.136 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3877424 00:31:38.136 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:31:38.136 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:38.136 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3877424 00:31:38.136 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:38.136 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:38.136 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3877424' 00:31:38.136 killing process with pid 3877424 00:31:38.136 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3877424 00:31:38.137 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3877424 00:31:38.137 [2024-11-26 20:09:38.901886] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:38.137 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:38.137 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:38.137 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:38.137 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:31:38.137 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:31:38.137 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:38.137 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:31:38.137 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:38.137 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:38.137 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:38.137 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:38.137 20:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:40.684 00:31:40.684 real 0m14.667s 00:31:40.684 user 
0m19.141s 00:31:40.684 sys 0m7.594s 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:40.684 ************************************ 00:31:40.684 END TEST nvmf_host_management 00:31:40.684 ************************************ 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:40.684 ************************************ 00:31:40.684 START TEST nvmf_lvol 00:31:40.684 ************************************ 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:40.684 * Looking for test storage... 00:31:40.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:40.684 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:40.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.685 --rc genhtml_branch_coverage=1 00:31:40.685 --rc genhtml_function_coverage=1 00:31:40.685 --rc genhtml_legend=1 00:31:40.685 --rc geninfo_all_blocks=1 00:31:40.685 --rc geninfo_unexecuted_blocks=1 00:31:40.685 00:31:40.685 ' 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:40.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.685 --rc genhtml_branch_coverage=1 00:31:40.685 --rc genhtml_function_coverage=1 00:31:40.685 --rc genhtml_legend=1 00:31:40.685 --rc geninfo_all_blocks=1 00:31:40.685 --rc geninfo_unexecuted_blocks=1 00:31:40.685 00:31:40.685 ' 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:40.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.685 --rc genhtml_branch_coverage=1 00:31:40.685 --rc genhtml_function_coverage=1 00:31:40.685 --rc genhtml_legend=1 00:31:40.685 --rc geninfo_all_blocks=1 00:31:40.685 --rc geninfo_unexecuted_blocks=1 00:31:40.685 00:31:40.685 ' 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:40.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.685 --rc genhtml_branch_coverage=1 00:31:40.685 --rc genhtml_function_coverage=1 
00:31:40.685 --rc genhtml_legend=1 00:31:40.685 --rc geninfo_all_blocks=1 00:31:40.685 --rc geninfo_unexecuted_blocks=1 00:31:40.685 00:31:40.685 ' 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:40.685 20:09:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:31:40.685 20:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:48.837 20:09:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:48.837 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:48.837 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:48.837 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:48.837 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:48.837 
20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:48.837 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:48.838 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:48.838 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:48.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:48.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:31:48.838 00:31:48.838 --- 10.0.0.2 ping statistics --- 00:31:48.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.838 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:31:48.838 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:48.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:48.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:31:48.838 00:31:48.838 --- 10.0.0.1 ping statistics --- 00:31:48.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.838 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:31:48.838 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:48.838 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:31:48.838 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:48.838 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:48.838 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:48.838 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:48.838 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:48.838 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:48.838 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:48.838 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:48.838 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:48.838 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:48.838 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:48.838 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3882485 00:31:48.838 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3882485 00:31:48.838 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:48.838 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3882485 ']' 00:31:48.838 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:48.838 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:48.838 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:48.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:48.838 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:48.838 20:09:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:48.838 [2024-11-26 20:09:48.966132] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
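The target in this run lives inside a dedicated network namespace, so the NVMe/TCP traffic between the two e810 ports (cvl_0_0 and cvl_0_1) crosses a real physical link on a single machine, as the ping checks above confirm. Condensed from the trace, the setup amounts to the commands below (interface names, addresses, and the 0x7 core mask are the values used in this run):

# One NIC port moves into the namespace and becomes the target side.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
# Accept the initiator's connections to port 4420, then launch the target
# inside the namespace in interrupt mode on three cores.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7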
00:31:48.838 [2024-11-26 20:09:48.967252] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:31:48.838 [2024-11-26 20:09:48.967303] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:48.838 [2024-11-26 20:09:49.068808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:48.838 [2024-11-26 20:09:49.121933] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:48.838 [2024-11-26 20:09:49.121983] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:48.838 [2024-11-26 20:09:49.121992] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:48.838 [2024-11-26 20:09:49.121999] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:48.838 [2024-11-26 20:09:49.122006] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:48.838 [2024-11-26 20:09:49.123874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:48.838 [2024-11-26 20:09:49.124034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:48.838 [2024-11-26 20:09:49.124035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:48.838 [2024-11-26 20:09:49.202049] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:48.838 [2024-11-26 20:09:49.203122] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:48.838 [2024-11-26 20:09:49.203486] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:48.838 [2024-11-26 20:09:49.203616] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:31:49.099 20:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:49.099 20:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:31:49.099 20:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:49.099 20:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:49.099 20:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:49.099 20:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:49.099 20:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:49.361 [2024-11-26 20:09:49.992923] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:49.361 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:49.622 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:49.622 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:49.883 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:49.883 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:49.883 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:50.144 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e3356582-5da1-41dc-9cfa-afbe793a2abe 00:31:50.144 20:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e3356582-5da1-41dc-9cfa-afbe793a2abe lvol 20 00:31:50.406 20:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=80ac33e8-08d1-4f06-9d13-1340989476d1 00:31:50.406 20:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:50.669 20:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 80ac33e8-08d1-4f06-9d13-1340989476d1 00:31:50.669 20:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:50.931 [2024-11-26 20:09:51.560858] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:31:50.931 20:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:51.192 20:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3883183 00:31:51.192 20:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:51.192 20:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:52.135 20:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 80ac33e8-08d1-4f06-9d13-1340989476d1 MY_SNAPSHOT 00:31:52.397 20:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=85e3aef1-6c06-44eb-89be-948c4bc978c4 00:31:52.397 20:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 80ac33e8-08d1-4f06-9d13-1340989476d1 30 00:31:52.658 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 85e3aef1-6c06-44eb-89be-948c4bc978c4 MY_CLONE 00:31:52.658 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e7db82e0-febe-4401-ae8e-2566310de304 00:31:52.658 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate e7db82e0-febe-4401-ae8e-2566310de304 00:31:53.230 20:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3883183 00:32:01.371 Initializing NVMe Controllers 00:32:01.371 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:01.371 Controller IO queue size 128, less than required. 00:32:01.371 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:01.371 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:32:01.371 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:32:01.371 Initialization complete. Launching workers. 
00:32:01.371 ======================================================== 00:32:01.371 Latency(us) 00:32:01.371 Device Information : IOPS MiB/s Average min max 00:32:01.371 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15203.90 59.39 8418.85 1909.61 80479.00 00:32:01.371 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15359.60 60.00 8334.30 647.08 61932.07 00:32:01.371 ======================================================== 00:32:01.371 Total : 30563.50 119.39 8376.36 647.08 80479.00 00:32:01.371 00:32:01.371 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:01.632 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 80ac33e8-08d1-4f06-9d13-1340989476d1 00:32:02.001 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e3356582-5da1-41dc-9cfa-afbe793a2abe 00:32:02.002 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:32:02.002 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:32:02.002 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:32:02.002 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:02.002 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:32:02.002 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:02.002 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:32:02.002 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:02.002 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:02.002 rmmod nvme_tcp 00:32:02.002 rmmod nvme_fabrics 00:32:02.002 rmmod nvme_keyring 00:32:02.002 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:02.002 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:32:02.002 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:32:02.002 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3882485 ']' 00:32:02.002 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3882485 00:32:02.002 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3882485 ']' 00:32:02.002 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3882485 00:32:02.002 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:32:02.002 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:02.002 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3882485 00:32:02.306 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:02.306 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:02.306 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3882485' 00:32:02.306 killing process with pid 3882485 00:32:02.306 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3882485 00:32:02.306 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3882485 00:32:02.306 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:02.306 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:02.306 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:02.306 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:32:02.306 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:32:02.306 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:02.306 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:32:02.306 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:02.306 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:02.306 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:02.306 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:02.306 20:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:04.219 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:04.219 00:32:04.219 real 0m23.927s 00:32:04.219 user 0m55.892s 00:32:04.219 sys 0m10.747s 00:32:04.219 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:04.219 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:04.219 ************************************ 00:32:04.219 END TEST nvmf_lvol 00:32:04.219 ************************************ 00:32:04.480 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:04.480 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:04.480 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:04.480 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:04.480 ************************************ 00:32:04.480 START TEST nvmf_lvs_grow 00:32:04.480 
************************************ 00:32:04.480 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:04.480 * Looking for test storage... 00:32:04.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:04.480 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:04.480 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:32:04.480 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:04.480 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:04.480 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:04.480 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:04.480 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:04.480 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:32:04.480 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:32:04.480 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:32:04.480 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:32:04.480 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:32:04.480 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:32:04.481 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:32:04.481 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:04.481 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:32:04.481 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:32:04.481 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:04.481 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:04.481 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:32:04.481 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:32:04.481 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:04.481 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:32:04.481 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:32:04.481 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:32:04.481 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:32:04.481 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:04.481 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:32:04.481 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:32:04.481 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:04.481 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:04.481 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:32:04.481 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:04.481 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:04.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.481 --rc genhtml_branch_coverage=1 00:32:04.481 --rc genhtml_function_coverage=1 00:32:04.481 --rc genhtml_legend=1 00:32:04.481 --rc geninfo_all_blocks=1 00:32:04.481 --rc geninfo_unexecuted_blocks=1 00:32:04.481 00:32:04.481 ' 00:32:04.481 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:04.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.481 --rc genhtml_branch_coverage=1 00:32:04.481 --rc genhtml_function_coverage=1 00:32:04.481 --rc genhtml_legend=1 00:32:04.481 --rc geninfo_all_blocks=1 00:32:04.481 --rc geninfo_unexecuted_blocks=1 00:32:04.481 00:32:04.481 ' 00:32:04.481 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:04.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.481 --rc genhtml_branch_coverage=1 00:32:04.481 --rc genhtml_function_coverage=1 00:32:04.481 --rc genhtml_legend=1 00:32:04.481 --rc geninfo_all_blocks=1 00:32:04.481 --rc geninfo_unexecuted_blocks=1 00:32:04.481 00:32:04.481 ' 00:32:04.481 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:04.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.481 --rc genhtml_branch_coverage=1 00:32:04.481 --rc genhtml_function_coverage=1 00:32:04.481 --rc genhtml_legend=1 00:32:04.481 --rc geninfo_all_blocks=1 00:32:04.481 --rc geninfo_unexecuted_blocks=1 00:32:04.481 00:32:04.481 ' 00:32:04.743 20:10:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:04.743 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:32:04.743 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:04.743 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:04.743 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:04.743 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:04.743 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:04.743 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:04.743 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:04.743 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:04.743 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:04.743 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:04.743 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:04.743 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:04.743 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:04.743 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:04.743 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:04.743 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:04.743 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:04.743 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:32:04.743 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:04.743 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:04.743 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:04.743 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
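Note on the xtrace above: nvmf/common.sh builds the target's argument array one flag at a time. A minimal sketch of what those increments amount to for this job (binary path taken from the nvmfappstart call later in this log; this is a condensation, not the literal common.sh source):

NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id and tracepoint group mask
NVMF_APP+=("${NO_HUGE[@]}")                   # empty unless a no-hugepages variant is requested
NVMF_APP+=(--interrupt-mode)                  # this job runs the target in interrupt mode, not polled mode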
00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:32:04.744 20:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:12.889 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:12.889 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:32:12.889 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:12.889 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:12.889 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:12.889 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:12.889 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:12.889 20:10:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:32:12.889 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:12.889 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:32:12.889 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:32:12.889 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:32:12.889 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:32:12.889 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:32:12.889 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:32:12.889 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:12.889 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:12.889 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:12.889 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:12.889 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:12.889 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:12.889 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:12.889 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
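The discovery loop that follows matches each PCI device against the e810/x722/mlx allow-lists built above, then resolves matching devices to their kernel net devices via sysfs. A condensed sketch of that loop (sysfs path, strip expression, and echo format as seen in the xtrace; transport checks and error handling omitted):

for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev directories under the PCI device
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs prefix, keep e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done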
00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:12.890 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:12.890 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:12.890 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:12.890 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:12.890 20:10:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:12.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:12.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:32:12.890 00:32:12.890 --- 10.0.0.2 ping statistics --- 00:32:12.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.890 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:32:12.890 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:12.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:12.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:32:12.890 00:32:12.891 --- 10.0.0.1 ping statistics --- 00:32:12.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.891 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:32:12.891 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:12.891 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:32:12.891 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:12.891 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:12.891 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:12.891 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:12.891 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:12.891 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:12.891 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:12.891 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:32:12.891 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:12.891 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:12.891 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:12.891 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3889333 00:32:12.891 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3889333 00:32:12.891 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:12.891 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3889333 ']' 00:32:12.891 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:12.891 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:12.891 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:12.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:12.891 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:12.891 20:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:12.891 [2024-11-26 20:10:12.933046] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
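With both namespace interfaces answering pings, the harness launches the interrupt-mode target inside the target namespace and waits for its RPC socket. The launch command below is verbatim from the log; the wait step is a sketch, assuming the usual poll-an-RPC-until-it-answers pattern (waitforlisten's internals are not shown in this log):

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &   # -m 0x1: single core, reactor in interrupt mode
nvmfpid=$!
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1   # /var/tmp/spdk.sock appears once the app has finished initializing
done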
00:32:12.891 [2024-11-26 20:10:12.934206] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:32:12.891 [2024-11-26 20:10:12.934260] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:12.891 [2024-11-26 20:10:13.034618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.891 [2024-11-26 20:10:13.086182] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:12.891 [2024-11-26 20:10:13.086234] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:12.891 [2024-11-26 20:10:13.086243] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:12.891 [2024-11-26 20:10:13.086250] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:12.891 [2024-11-26 20:10:13.086256] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:12.891 [2024-11-26 20:10:13.087011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:12.891 [2024-11-26 20:10:13.164114] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:12.891 [2024-11-26 20:10:13.164418] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:13.153 20:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:13.153 20:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:32:13.153 20:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:13.153 20:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:13.153 20:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:13.153 20:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:13.153 20:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:13.153 [2024-11-26 20:10:13.951908] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:13.416 20:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:13.416 20:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:13.416 20:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:13.416 20:10:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:13.416 ************************************ 00:32:13.416 START TEST lvs_grow_clean 00:32:13.416 ************************************ 00:32:13.416 20:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:32:13.416 20:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:13.416 20:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:13.416 20:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:13.416 20:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:13.416 20:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:13.416 20:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:13.416 20:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:13.416 20:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:13.416 20:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:13.677 20:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:13.677 20:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:13.677 20:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=54ccb6a1-a5d7-427c-b8af-f30d05bda570 00:32:13.677 20:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54ccb6a1-a5d7-427c-b8af-f30d05bda570 00:32:13.677 20:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:13.937 20:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:13.937 20:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:13.937 20:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 54ccb6a1-a5d7-427c-b8af-f30d05bda570 lvol 150 00:32:14.197 20:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e9af0b11-7684-4640-90de-528b21dccf9c 00:32:14.197 20:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:14.197 20:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:14.197 [2024-11-26 20:10:14.967582] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:14.197 [2024-11-26 20:10:14.967748] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:14.197 true 00:32:14.197 20:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:14.197 20:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54ccb6a1-a5d7-427c-b8af-f30d05bda570 00:32:14.458 20:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:14.458 20:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:14.719 20:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e9af0b11-7684-4640-90de-528b21dccf9c 00:32:14.719 20:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:14.979 [2024-11-26 20:10:15.692321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:14.979 20:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:15.242 20:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:15.242 20:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3889910 00:32:15.242 20:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:15.242 20:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3889910 /var/tmp/bdevperf.sock 00:32:15.242 20:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3889910 ']' 00:32:15.242 20:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:32:15.242 20:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:15.242 20:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:15.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:15.242 20:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:15.242 20:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:15.242 [2024-11-26 20:10:15.926250] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:32:15.242 [2024-11-26 20:10:15.926317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3889910 ] 00:32:15.242 [2024-11-26 20:10:16.016533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.503 [2024-11-26 20:10:16.068758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:16.073 20:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:16.073 20:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:32:16.073 20:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:16.334 Nvme0n1 00:32:16.335 20:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:16.595 [ 00:32:16.595 { 00:32:16.595 "name": "Nvme0n1", 00:32:16.595 "aliases": [ 00:32:16.595 "e9af0b11-7684-4640-90de-528b21dccf9c" 00:32:16.595 ], 00:32:16.595 "product_name": "NVMe disk", 00:32:16.595 "block_size": 4096, 00:32:16.595 "num_blocks": 38912, 00:32:16.595 "uuid": "e9af0b11-7684-4640-90de-528b21dccf9c", 00:32:16.595 "numa_id": 0, 00:32:16.595 "assigned_rate_limits": { 00:32:16.595 "rw_ios_per_sec": 0, 00:32:16.595 "rw_mbytes_per_sec": 0, 00:32:16.595 "r_mbytes_per_sec": 0, 00:32:16.595 "w_mbytes_per_sec": 0 00:32:16.595 }, 00:32:16.595 "claimed": false, 00:32:16.595 "zoned": false, 00:32:16.595 "supported_io_types": { 00:32:16.595 "read": true, 00:32:16.595 "write": true, 00:32:16.595 "unmap": true, 00:32:16.595 "flush": true, 00:32:16.595 "reset": true, 00:32:16.595 "nvme_admin": true, 00:32:16.595 "nvme_io": true, 00:32:16.595 "nvme_io_md": false, 00:32:16.595 "write_zeroes": true, 00:32:16.595 "zcopy": false, 00:32:16.595 "get_zone_info": false, 00:32:16.595 "zone_management": false, 00:32:16.595 "zone_append": false, 00:32:16.595 "compare": true, 00:32:16.595 "compare_and_write": true, 00:32:16.595 "abort": true, 00:32:16.595 "seek_hole": false, 00:32:16.595 "seek_data": false, 00:32:16.596 "copy": true, 
00:32:16.596 "nvme_iov_md": false 00:32:16.596 }, 00:32:16.596 "memory_domains": [ 00:32:16.596 { 00:32:16.596 "dma_device_id": "system", 00:32:16.596 "dma_device_type": 1 00:32:16.596 } 00:32:16.596 ], 00:32:16.596 "driver_specific": { 00:32:16.596 "nvme": [ 00:32:16.596 { 00:32:16.596 "trid": { 00:32:16.596 "trtype": "TCP", 00:32:16.596 "adrfam": "IPv4", 00:32:16.596 "traddr": "10.0.0.2", 00:32:16.596 "trsvcid": "4420", 00:32:16.596 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:16.596 }, 00:32:16.596 "ctrlr_data": { 00:32:16.596 "cntlid": 1, 00:32:16.596 "vendor_id": "0x8086", 00:32:16.596 "model_number": "SPDK bdev Controller", 00:32:16.596 "serial_number": "SPDK0", 00:32:16.596 "firmware_revision": "25.01", 00:32:16.596 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:16.596 "oacs": { 00:32:16.596 "security": 0, 00:32:16.596 "format": 0, 00:32:16.596 "firmware": 0, 00:32:16.596 "ns_manage": 0 00:32:16.596 }, 00:32:16.596 "multi_ctrlr": true, 00:32:16.596 "ana_reporting": false 00:32:16.596 }, 00:32:16.596 "vs": { 00:32:16.596 "nvme_version": "1.3" 00:32:16.596 }, 00:32:16.596 "ns_data": { 00:32:16.596 "id": 1, 00:32:16.596 "can_share": true 00:32:16.596 } 00:32:16.596 } 00:32:16.596 ], 00:32:16.596 "mp_policy": "active_passive" 00:32:16.596 } 00:32:16.596 } 00:32:16.596 ] 00:32:16.596 20:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3890249 00:32:16.596 20:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:16.596 20:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:16.596 Running I/O for 10 seconds... 
00:32:17.981 Latency(us) 00:32:17.981 [2024-11-26T19:10:18.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:17.981 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:17.981 Nvme0n1 : 1.00 16764.00 65.48 0.00 0.00 0.00 0.00 0.00 00:32:17.981 [2024-11-26T19:10:18.802Z] =================================================================================================================== 00:32:17.981 [2024-11-26T19:10:18.802Z] Total : 16764.00 65.48 0.00 0.00 0.00 0.00 0.00 00:32:17.981 00:32:18.553 20:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 54ccb6a1-a5d7-427c-b8af-f30d05bda570 00:32:18.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:18.553 Nvme0n1 : 2.00 17018.00 66.48 0.00 0.00 0.00 0.00 0.00 00:32:18.553 [2024-11-26T19:10:19.374Z] =================================================================================================================== 00:32:18.553 [2024-11-26T19:10:19.374Z] Total : 17018.00 66.48 0.00 0.00 0.00 0.00 0.00 00:32:18.553 00:32:18.813 true 00:32:18.813 20:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54ccb6a1-a5d7-427c-b8af-f30d05bda570 00:32:18.813 20:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:19.074 20:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:19.074 20:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:19.074 20:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3890249 00:32:19.645 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:19.645 Nvme0n1 : 3.00 17272.00 67.47 0.00 0.00 0.00 0.00 0.00 00:32:19.645 [2024-11-26T19:10:20.466Z] =================================================================================================================== 00:32:19.645 [2024-11-26T19:10:20.466Z] Total : 17272.00 67.47 0.00 0.00 0.00 0.00 0.00 00:32:19.645 00:32:20.587 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:20.587 Nvme0n1 : 4.00 17462.50 68.21 0.00 0.00 0.00 0.00 0.00 00:32:20.587 [2024-11-26T19:10:21.408Z] =================================================================================================================== 00:32:20.587 [2024-11-26T19:10:21.408Z] Total : 17462.50 68.21 0.00 0.00 0.00 0.00 0.00 00:32:20.587 00:32:21.973 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:21.973 Nvme0n1 : 5.00 19024.60 74.31 0.00 0.00 0.00 0.00 0.00 00:32:21.973 [2024-11-26T19:10:22.794Z] =================================================================================================================== 00:32:21.973 [2024-11-26T19:10:22.794Z] Total : 19024.60 74.31 0.00 0.00 0.00 0.00 0.00 00:32:21.973 00:32:22.916 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:22.916 Nvme0n1 : 6.00 20092.83 78.49 0.00 0.00 0.00 0.00 0.00 00:32:22.916 [2024-11-26T19:10:23.737Z] 
=================================================================================================================== 00:32:22.916 [2024-11-26T19:10:23.737Z] Total : 20092.83 78.49 0.00 0.00 0.00 0.00 0.00 00:32:22.916 00:32:23.857 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:23.857 Nvme0n1 : 7.00 20869.14 81.52 0.00 0.00 0.00 0.00 0.00 00:32:23.857 [2024-11-26T19:10:24.678Z] =================================================================================================================== 00:32:23.857 [2024-11-26T19:10:24.678Z] Total : 20869.14 81.52 0.00 0.00 0.00 0.00 0.00 00:32:23.857 00:32:24.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:24.801 Nvme0n1 : 8.00 21443.50 83.76 0.00 0.00 0.00 0.00 0.00 00:32:24.801 [2024-11-26T19:10:25.622Z] =================================================================================================================== 00:32:24.801 [2024-11-26T19:10:25.622Z] Total : 21443.50 83.76 0.00 0.00 0.00 0.00 0.00 00:32:24.801 00:32:25.742 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:25.742 Nvme0n1 : 9.00 21895.56 85.53 0.00 0.00 0.00 0.00 0.00 00:32:25.742 [2024-11-26T19:10:26.563Z] =================================================================================================================== 00:32:25.742 [2024-11-26T19:10:26.563Z] Total : 21895.56 85.53 0.00 0.00 0.00 0.00 0.00 00:32:25.742 00:32:26.685 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:26.685 Nvme0n1 : 10.00 22258.70 86.95 0.00 0.00 0.00 0.00 0.00 00:32:26.685 [2024-11-26T19:10:27.506Z] =================================================================================================================== 00:32:26.685 [2024-11-26T19:10:27.506Z] Total : 22258.70 86.95 0.00 0.00 0.00 0.00 0.00 00:32:26.685 00:32:26.685 00:32:26.685 Latency(us) 00:32:26.685 [2024-11-26T19:10:27.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:26.685 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:26.685 Nvme0n1 : 10.01 22258.92 86.95 0.00 0.00 5747.45 2908.16 32549.55 00:32:26.685 [2024-11-26T19:10:27.506Z] =================================================================================================================== 00:32:26.685 [2024-11-26T19:10:27.506Z] Total : 22258.92 86.95 0.00 0.00 5747.45 2908.16 32549.55 00:32:26.685 { 00:32:26.685 "results": [ 00:32:26.685 { 00:32:26.685 "job": "Nvme0n1", 00:32:26.685 "core_mask": "0x2", 00:32:26.685 "workload": "randwrite", 00:32:26.685 "status": "finished", 00:32:26.685 "queue_depth": 128, 00:32:26.685 "io_size": 4096, 00:32:26.685 "runtime": 10.00565, 00:32:26.685 "iops": 22258.92370810492, 00:32:26.685 "mibps": 86.94892073478485, 00:32:26.685 "io_failed": 0, 00:32:26.685 "io_timeout": 0, 00:32:26.685 "avg_latency_us": 5747.452584214504, 00:32:26.685 "min_latency_us": 2908.16, 00:32:26.685 "max_latency_us": 32549.546666666665 00:32:26.685 } 00:32:26.685 ], 00:32:26.685 "core_count": 1 00:32:26.685 } 00:32:26.685 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3889910 00:32:26.686 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3889910 ']' 00:32:26.686 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3889910 00:32:26.686 
20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:32:26.686 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:26.686 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3889910 00:32:26.686 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:26.686 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:26.686 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3889910' 00:32:26.686 killing process with pid 3889910 00:32:26.686 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3889910 00:32:26.686 Received shutdown signal, test time was about 10.000000 seconds 00:32:26.686 00:32:26.686 Latency(us) 00:32:26.686 [2024-11-26T19:10:27.507Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:26.686 [2024-11-26T19:10:27.507Z] =================================================================================================================== 00:32:26.686 [2024-11-26T19:10:27.507Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:26.686 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3889910 00:32:26.947 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:26.947 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:27.213 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54ccb6a1-a5d7-427c-b8af-f30d05bda570 00:32:27.213 20:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:27.474 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:27.474 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:27.474 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:27.474 [2024-11-26 20:10:28.271654] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:27.735 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54ccb6a1-a5d7-427c-b8af-f30d05bda570 00:32:27.735 
20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0
00:32:27.735 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54ccb6a1-a5d7-427c-b8af-f30d05bda570
00:32:27.735 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:32:27.735 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:27.735 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:32:27.735 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:27.735 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:32:27.735 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:27.735 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:32:27.735 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:32:27.735 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54ccb6a1-a5d7-427c-b8af-f30d05bda570
00:32:27.735 request:
00:32:27.735 {
00:32:27.735 "uuid": "54ccb6a1-a5d7-427c-b8af-f30d05bda570",
00:32:27.735 "method": "bdev_lvol_get_lvstores",
00:32:27.735 "req_id": 1
00:32:27.735 }
00:32:27.735 Got JSON-RPC error response
00:32:27.735 response:
00:32:27.735 {
00:32:27.735 "code": -19,
00:32:27.735 "message": "No such device"
00:32:27.735 }
00:32:27.735 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1
00:32:27.735 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:32:27.735 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:32:27.735 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:32:27.735 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:32:27.996 aio_bdev
00:32:27.996 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev
e9af0b11-7684-4640-90de-528b21dccf9c 00:32:27.996 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=e9af0b11-7684-4640-90de-528b21dccf9c 00:32:27.996 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:27.996 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:32:27.996 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:27.996 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:27.996 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:28.257 20:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e9af0b11-7684-4640-90de-528b21dccf9c -t 2000 00:32:28.257 [ 00:32:28.257 { 00:32:28.257 "name": "e9af0b11-7684-4640-90de-528b21dccf9c", 00:32:28.257 "aliases": [ 00:32:28.257 "lvs/lvol" 00:32:28.257 ], 00:32:28.257 "product_name": "Logical Volume", 00:32:28.257 "block_size": 4096, 00:32:28.257 "num_blocks": 38912, 00:32:28.257 "uuid": "e9af0b11-7684-4640-90de-528b21dccf9c", 00:32:28.257 "assigned_rate_limits": { 00:32:28.257 "rw_ios_per_sec": 0, 00:32:28.257 "rw_mbytes_per_sec": 0, 00:32:28.257 "r_mbytes_per_sec": 0, 00:32:28.257 "w_mbytes_per_sec": 0 00:32:28.257 }, 00:32:28.257 "claimed": false, 00:32:28.257 "zoned": false, 00:32:28.257 "supported_io_types": { 00:32:28.257 "read": true, 00:32:28.257 "write": true, 00:32:28.257 "unmap": true, 00:32:28.257 "flush": false, 00:32:28.257 "reset": true, 00:32:28.257 "nvme_admin": false, 00:32:28.257 "nvme_io": false, 00:32:28.257 "nvme_io_md": false, 00:32:28.257 "write_zeroes": true, 00:32:28.257 "zcopy": false, 00:32:28.257 "get_zone_info": false, 00:32:28.257 "zone_management": false, 00:32:28.257 "zone_append": false, 00:32:28.257 "compare": false, 00:32:28.257 "compare_and_write": false, 00:32:28.257 "abort": false, 00:32:28.257 "seek_hole": true, 00:32:28.257 "seek_data": true, 00:32:28.257 "copy": false, 00:32:28.257 "nvme_iov_md": false 00:32:28.257 }, 00:32:28.257 "driver_specific": { 00:32:28.257 "lvol": { 00:32:28.257 "lvol_store_uuid": "54ccb6a1-a5d7-427c-b8af-f30d05bda570", 00:32:28.257 "base_bdev": "aio_bdev", 00:32:28.257 "thin_provision": false, 00:32:28.257 "num_allocated_clusters": 38, 00:32:28.257 "snapshot": false, 00:32:28.257 "clone": false, 00:32:28.257 "esnap_clone": false 00:32:28.257 } 00:32:28.257 } 00:32:28.257 } 00:32:28.257 ] 00:32:28.257 20:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:32:28.257 20:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54ccb6a1-a5d7-427c-b8af-f30d05bda570 00:32:28.257 20:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:28.518 20:10:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:28.519 20:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54ccb6a1-a5d7-427c-b8af-f30d05bda570 00:32:28.519 20:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:28.784 20:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:28.784 20:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e9af0b11-7684-4640-90de-528b21dccf9c 00:32:28.784 20:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 54ccb6a1-a5d7-427c-b8af-f30d05bda570 00:32:29.085 20:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:29.348 20:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:29.348 00:32:29.348 real 0m15.944s 00:32:29.348 user 0m15.634s 00:32:29.348 sys 0m1.459s 00:32:29.348 20:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:29.348 20:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:29.348 ************************************ 00:32:29.348 END TEST lvs_grow_clean 00:32:29.348 ************************************ 00:32:29.348 20:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:29.348 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:29.348 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:29.348 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:29.348 ************************************ 00:32:29.348 START TEST lvs_grow_dirty 00:32:29.348 ************************************ 00:32:29.348 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:32:29.348 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:29.348 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:29.348 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:29.348 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:29.348 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:29.348 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:29.348 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:29.348 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:29.348 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:29.609 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:29.609 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:29.869 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6711d9af-c393-4022-b462-95e38e8eff1f 00:32:29.870 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6711d9af-c393-4022-b462-95e38e8eff1f 00:32:29.870 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:29.870 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:29.870 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:29.870 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6711d9af-c393-4022-b462-95e38e8eff1f lvol 150 00:32:30.130 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=fb0c68b5-2aa7-404a-b7b8-5a17661ba3e7 00:32:30.130 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:30.130 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:30.392 [2024-11-26 20:10:30.951581] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:30.392 [2024-11-26 20:10:30.951753] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:30.392 true 00:32:30.392 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6711d9af-c393-4022-b462-95e38e8eff1f 00:32:30.392 20:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:30.392 20:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:30.392 20:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:30.653 20:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fb0c68b5-2aa7-404a-b7b8-5a17661ba3e7 00:32:30.914 20:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:30.914 [2024-11-26 20:10:31.676137] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:30.914 20:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:31.175 20:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3892987 00:32:31.175 20:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:31.175 20:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:31.175 20:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3892987 /var/tmp/bdevperf.sock 00:32:31.175 20:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3892987 ']' 00:32:31.175 20:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:31.175 20:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:31.175 20:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:31.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
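The cluster counts asserted throughout these tests follow directly from the sizes set up above. With --cluster-sz 4194304 (4 MiB) on a 200 MiB AIO file, the observed total_data_clusters of 49 implies one cluster's worth of blobstore metadata overhead; growing the file to 400 MiB and rescanning yields the 99 checked after bdev_lvol_grow_lvstore runs. A quick sanity check in shell arithmetic, assuming that one-cluster overhead holds at both sizes:

# 200 MiB file / 4 MiB clusters, minus metadata -> 49 data clusters
echo $(( 200 / 4 - 1 ))   # 49
# after truncate -s 400M + bdev_aio_rescan + bdev_lvol_grow_lvstore -> 99
echo $(( 400 / 4 - 1 ))   # 99
# the 150 MiB lvol occupies ceil(150/4) = 38 clusters ("num_allocated_clusters": 38),
# so free_clusters checked later = 99 - 38 = 61
echo $(( 99 - 38 ))       # 61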
00:32:31.175 20:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:31.175 20:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:31.175 [2024-11-26 20:10:31.898185] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:32:31.175 [2024-11-26 20:10:31.898243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3892987 ] 00:32:31.175 [2024-11-26 20:10:31.980286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.435 [2024-11-26 20:10:32.010485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:32.006 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:32.006 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:32.006 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:32.267 Nvme0n1 00:32:32.267 20:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:32.267 [ 00:32:32.267 { 00:32:32.267 "name": "Nvme0n1", 00:32:32.267 "aliases": [ 00:32:32.267 "fb0c68b5-2aa7-404a-b7b8-5a17661ba3e7" 00:32:32.267 ], 00:32:32.267 "product_name": "NVMe disk", 00:32:32.267 "block_size": 4096, 00:32:32.267 "num_blocks": 38912, 00:32:32.267 "uuid": "fb0c68b5-2aa7-404a-b7b8-5a17661ba3e7", 00:32:32.267 "numa_id": 0, 00:32:32.267 "assigned_rate_limits": { 00:32:32.267 "rw_ios_per_sec": 0, 00:32:32.267 "rw_mbytes_per_sec": 0, 00:32:32.267 "r_mbytes_per_sec": 0, 00:32:32.267 "w_mbytes_per_sec": 0 00:32:32.267 }, 00:32:32.267 "claimed": false, 00:32:32.267 "zoned": false, 00:32:32.267 "supported_io_types": { 00:32:32.267 "read": true, 00:32:32.267 "write": true, 00:32:32.267 "unmap": true, 00:32:32.267 "flush": true, 00:32:32.267 "reset": true, 00:32:32.267 "nvme_admin": true, 00:32:32.267 "nvme_io": true, 00:32:32.267 "nvme_io_md": false, 00:32:32.267 "write_zeroes": true, 00:32:32.267 "zcopy": false, 00:32:32.267 "get_zone_info": false, 00:32:32.267 "zone_management": false, 00:32:32.267 "zone_append": false, 00:32:32.267 "compare": true, 00:32:32.267 "compare_and_write": true, 00:32:32.267 "abort": true, 00:32:32.267 "seek_hole": false, 00:32:32.267 "seek_data": false, 00:32:32.267 "copy": true, 00:32:32.267 "nvme_iov_md": false 00:32:32.267 }, 00:32:32.267 "memory_domains": [ 00:32:32.267 { 00:32:32.267 "dma_device_id": "system", 00:32:32.268 "dma_device_type": 1 00:32:32.268 } 00:32:32.268 ], 00:32:32.268 "driver_specific": { 00:32:32.268 "nvme": [ 00:32:32.268 { 00:32:32.268 "trid": { 00:32:32.268 "trtype": "TCP", 00:32:32.268 "adrfam": "IPv4", 00:32:32.268 "traddr": "10.0.0.2", 00:32:32.268 "trsvcid": "4420", 00:32:32.268 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:32.268 }, 00:32:32.268 "ctrlr_data": 
{ 00:32:32.268 "cntlid": 1, 00:32:32.268 "vendor_id": "0x8086", 00:32:32.268 "model_number": "SPDK bdev Controller", 00:32:32.268 "serial_number": "SPDK0", 00:32:32.268 "firmware_revision": "25.01", 00:32:32.268 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:32.268 "oacs": { 00:32:32.268 "security": 0, 00:32:32.268 "format": 0, 00:32:32.268 "firmware": 0, 00:32:32.268 "ns_manage": 0 00:32:32.268 }, 00:32:32.268 "multi_ctrlr": true, 00:32:32.268 "ana_reporting": false 00:32:32.268 }, 00:32:32.268 "vs": { 00:32:32.268 "nvme_version": "1.3" 00:32:32.268 }, 00:32:32.268 "ns_data": { 00:32:32.268 "id": 1, 00:32:32.268 "can_share": true 00:32:32.268 } 00:32:32.268 } 00:32:32.268 ], 00:32:32.268 "mp_policy": "active_passive" 00:32:32.268 } 00:32:32.268 } 00:32:32.268 ] 00:32:32.268 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3893287 00:32:32.268 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:32.268 20:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:32.529 Running I/O for 10 seconds... 00:32:33.471 Latency(us) 00:32:33.471 [2024-11-26T19:10:34.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:33.471 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:33.471 Nvme0n1 : 1.00 17399.00 67.96 0.00 0.00 0.00 0.00 0.00 00:32:33.471 [2024-11-26T19:10:34.292Z] =================================================================================================================== 00:32:33.471 [2024-11-26T19:10:34.292Z] Total : 17399.00 67.96 0.00 0.00 0.00 0.00 0.00 00:32:33.471 00:32:34.413 20:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6711d9af-c393-4022-b462-95e38e8eff1f 00:32:34.413 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:34.414 Nvme0n1 : 2.00 17702.00 69.15 0.00 0.00 0.00 0.00 0.00 00:32:34.414 [2024-11-26T19:10:35.235Z] =================================================================================================================== 00:32:34.414 [2024-11-26T19:10:35.235Z] Total : 17702.00 69.15 0.00 0.00 0.00 0.00 0.00 00:32:34.414 00:32:34.675 true 00:32:34.675 20:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6711d9af-c393-4022-b462-95e38e8eff1f 00:32:34.675 20:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:34.675 20:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:34.675 20:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:34.675 20:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3893287 00:32:35.616 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:35.616 Nvme0n1 : 
3.00 17780.00 69.45 0.00 0.00 0.00 0.00 0.00 00:32:35.616 [2024-11-26T19:10:36.437Z] =================================================================================================================== 00:32:35.616 [2024-11-26T19:10:36.437Z] Total : 17780.00 69.45 0.00 0.00 0.00 0.00 0.00 00:32:35.616 00:32:36.558 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:36.558 Nvme0n1 : 4.00 17843.50 69.70 0.00 0.00 0.00 0.00 0.00 00:32:36.558 [2024-11-26T19:10:37.379Z] =================================================================================================================== 00:32:36.558 [2024-11-26T19:10:37.379Z] Total : 17843.50 69.70 0.00 0.00 0.00 0.00 0.00 00:32:36.558 00:32:37.504 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:37.504 Nvme0n1 : 5.00 18897.60 73.82 0.00 0.00 0.00 0.00 0.00 00:32:37.504 [2024-11-26T19:10:38.325Z] =================================================================================================================== 00:32:37.504 [2024-11-26T19:10:38.325Z] Total : 18897.60 73.82 0.00 0.00 0.00 0.00 0.00 00:32:37.504 00:32:38.447 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:38.447 Nvme0n1 : 6.00 19981.33 78.05 0.00 0.00 0.00 0.00 0.00 00:32:38.447 [2024-11-26T19:10:39.268Z] =================================================================================================================== 00:32:38.447 [2024-11-26T19:10:39.268Z] Total : 19981.33 78.05 0.00 0.00 0.00 0.00 0.00 00:32:38.447 00:32:39.389 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:39.389 Nvme0n1 : 7.00 20721.57 80.94 0.00 0.00 0.00 0.00 0.00 00:32:39.389 [2024-11-26T19:10:40.210Z] =================================================================================================================== 00:32:39.389 [2024-11-26T19:10:40.210Z] Total : 20721.57 80.94 0.00 0.00 0.00 0.00 0.00 00:32:39.389 00:32:40.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:40.784 Nvme0n1 : 8.00 21314.38 83.26 0.00 0.00 0.00 0.00 0.00 00:32:40.784 [2024-11-26T19:10:41.605Z] =================================================================================================================== 00:32:40.784 [2024-11-26T19:10:41.605Z] Total : 21314.38 83.26 0.00 0.00 0.00 0.00 0.00 00:32:40.784 00:32:41.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:41.725 Nvme0n1 : 9.00 21780.78 85.08 0.00 0.00 0.00 0.00 0.00 00:32:41.725 [2024-11-26T19:10:42.546Z] =================================================================================================================== 00:32:41.725 [2024-11-26T19:10:42.546Z] Total : 21780.78 85.08 0.00 0.00 0.00 0.00 0.00 00:32:41.725 00:32:42.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:42.667 Nvme0n1 : 10.00 22142.70 86.49 0.00 0.00 0.00 0.00 0.00 00:32:42.667 [2024-11-26T19:10:43.488Z] =================================================================================================================== 00:32:42.667 [2024-11-26T19:10:43.488Z] Total : 22142.70 86.49 0.00 0.00 0.00 0.00 0.00 00:32:42.667 00:32:42.667 00:32:42.667 Latency(us) 00:32:42.667 [2024-11-26T19:10:43.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:42.667 Nvme0n1 : 10.00 22146.11 86.51 0.00 0.00 5777.13 3003.73 31238.83 00:32:42.667 
[2024-11-26T19:10:43.488Z] =================================================================================================================== 00:32:42.667 [2024-11-26T19:10:43.488Z] Total : 22146.11 86.51 0.00 0.00 5777.13 3003.73 31238.83 00:32:42.667 { 00:32:42.667 "results": [ 00:32:42.667 { 00:32:42.667 "job": "Nvme0n1", 00:32:42.667 "core_mask": "0x2", 00:32:42.667 "workload": "randwrite", 00:32:42.667 "status": "finished", 00:32:42.667 "queue_depth": 128, 00:32:42.667 "io_size": 4096, 00:32:42.667 "runtime": 10.004239, 00:32:42.667 "iops": 22146.11226301171, 00:32:42.667 "mibps": 86.50825102738949, 00:32:42.667 "io_failed": 0, 00:32:42.667 "io_timeout": 0, 00:32:42.667 "avg_latency_us": 5777.127097710877, 00:32:42.667 "min_latency_us": 3003.733333333333, 00:32:42.667 "max_latency_us": 31238.826666666668 00:32:42.667 } 00:32:42.667 ], 00:32:42.667 "core_count": 1 00:32:42.667 } 00:32:42.667 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3892987 00:32:42.668 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3892987 ']' 00:32:42.668 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3892987 00:32:42.668 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:32:42.668 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:42.668 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3892987 00:32:42.668 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:42.668 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:42.668 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3892987' 00:32:42.668 killing process with pid 3892987 00:32:42.668 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3892987 00:32:42.668 Received shutdown signal, test time was about 10.000000 seconds 00:32:42.668 00:32:42.668 Latency(us) 00:32:42.668 [2024-11-26T19:10:43.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.668 [2024-11-26T19:10:43.489Z] =================================================================================================================== 00:32:42.668 [2024-11-26T19:10:43.489Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:42.668 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3892987 00:32:42.668 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:42.928 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:32:42.928 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6711d9af-c393-4022-b462-95e38e8eff1f 00:32:42.928 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:43.188 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:43.188 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:43.188 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3889333 00:32:43.188 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3889333 00:32:43.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3889333 Killed "${NVMF_APP[@]}" "$@" 00:32:43.188 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:43.188 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:43.188 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:43.188 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:43.188 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:43.188 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3895337 00:32:43.188 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3895337 00:32:43.188 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:43.188 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3895337 ']' 00:32:43.188 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:43.188 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:43.188 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:43.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
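This is the step that makes the test "dirty": after the lvstore has been grown, the nvmf target (pid 3889333) is removed with SIGKILL rather than shut down cleanly, so the blobstore on aio_bdev is left without a clean-shutdown marker (hence the bs_recover notices further below). A fresh target is then started inside the test namespace; both commands appear verbatim in this log, with only the binary path shortened here:

# kill the old target uncleanly, then restart it in interrupt mode
kill -9 3889333
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &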
00:32:43.188 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:43.188 20:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:43.188 [2024-11-26 20:10:43.999279] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:43.188 [2024-11-26 20:10:44.000300] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:32:43.188 [2024-11-26 20:10:44.000348] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:43.450 [2024-11-26 20:10:44.090452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:43.450 [2024-11-26 20:10:44.122018] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:43.450 [2024-11-26 20:10:44.122046] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:43.450 [2024-11-26 20:10:44.122053] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:43.450 [2024-11-26 20:10:44.122057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:43.450 [2024-11-26 20:10:44.122062] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:43.450 [2024-11-26 20:10:44.122517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:43.450 [2024-11-26 20:10:44.174889] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:43.450 [2024-11-26 20:10:44.175078] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
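With the new target up, re-creating the AIO bdev below is what re-imports the lvstore: blobstore load detects the unclean shutdown and replays metadata, which is exactly what the "Performing recovery on blobstore" and "Recover: blob 0x0"/"0x1" notices that follow are reporting. The reload is a single RPC (path shortened):

# re-attach the backing file; the lvol store and lvol are recovered, not recreated
scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096

The NOT wrapper traced later around bdev_lvol_get_lvstores works as in the clean test: NOT inverts the command's exit status, so that step passes precisely because the RPC fails with -19 (No such device) once aio_bdev has been deleted out from under the lvstore.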
00:32:44.023 20:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:44.023 20:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:44.023 20:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:44.023 20:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:44.023 20:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:44.285 20:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:44.285 20:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:44.285 [2024-11-26 20:10:45.008690] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:44.285 [2024-11-26 20:10:45.008922] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:44.285 [2024-11-26 20:10:45.009014] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:44.285 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:44.285 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev fb0c68b5-2aa7-404a-b7b8-5a17661ba3e7 00:32:44.285 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=fb0c68b5-2aa7-404a-b7b8-5a17661ba3e7 00:32:44.285 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:44.285 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:44.285 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:44.285 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:44.285 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:44.548 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fb0c68b5-2aa7-404a-b7b8-5a17661ba3e7 -t 2000 00:32:44.548 [ 00:32:44.548 { 00:32:44.548 "name": "fb0c68b5-2aa7-404a-b7b8-5a17661ba3e7", 00:32:44.548 "aliases": [ 00:32:44.548 "lvs/lvol" 00:32:44.548 ], 00:32:44.548 "product_name": "Logical Volume", 00:32:44.548 "block_size": 4096, 00:32:44.548 "num_blocks": 38912, 00:32:44.548 "uuid": "fb0c68b5-2aa7-404a-b7b8-5a17661ba3e7", 00:32:44.548 "assigned_rate_limits": { 00:32:44.548 "rw_ios_per_sec": 0, 00:32:44.548 "rw_mbytes_per_sec": 0, 00:32:44.548 
"r_mbytes_per_sec": 0, 00:32:44.548 "w_mbytes_per_sec": 0 00:32:44.548 }, 00:32:44.548 "claimed": false, 00:32:44.548 "zoned": false, 00:32:44.548 "supported_io_types": { 00:32:44.548 "read": true, 00:32:44.548 "write": true, 00:32:44.548 "unmap": true, 00:32:44.548 "flush": false, 00:32:44.548 "reset": true, 00:32:44.548 "nvme_admin": false, 00:32:44.548 "nvme_io": false, 00:32:44.548 "nvme_io_md": false, 00:32:44.548 "write_zeroes": true, 00:32:44.548 "zcopy": false, 00:32:44.548 "get_zone_info": false, 00:32:44.548 "zone_management": false, 00:32:44.548 "zone_append": false, 00:32:44.548 "compare": false, 00:32:44.548 "compare_and_write": false, 00:32:44.548 "abort": false, 00:32:44.548 "seek_hole": true, 00:32:44.548 "seek_data": true, 00:32:44.548 "copy": false, 00:32:44.548 "nvme_iov_md": false 00:32:44.548 }, 00:32:44.548 "driver_specific": { 00:32:44.548 "lvol": { 00:32:44.548 "lvol_store_uuid": "6711d9af-c393-4022-b462-95e38e8eff1f", 00:32:44.548 "base_bdev": "aio_bdev", 00:32:44.548 "thin_provision": false, 00:32:44.548 "num_allocated_clusters": 38, 00:32:44.548 "snapshot": false, 00:32:44.548 "clone": false, 00:32:44.548 "esnap_clone": false 00:32:44.548 } 00:32:44.548 } 00:32:44.548 } 00:32:44.548 ] 00:32:44.809 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:44.809 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6711d9af-c393-4022-b462-95e38e8eff1f 00:32:44.809 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:44.809 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:44.809 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6711d9af-c393-4022-b462-95e38e8eff1f 00:32:44.809 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:45.069 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:45.069 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:45.069 [2024-11-26 20:10:45.863002] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:45.329 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6711d9af-c393-4022-b462-95e38e8eff1f 00:32:45.329 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:32:45.329 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6711d9af-c393-4022-b462-95e38e8eff1f 00:32:45.329 20:10:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:45.329 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:45.329 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:45.330 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:45.330 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:45.330 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:45.330 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:45.330 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:45.330 20:10:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6711d9af-c393-4022-b462-95e38e8eff1f 00:32:45.330 request: 00:32:45.330 { 00:32:45.330 "uuid": "6711d9af-c393-4022-b462-95e38e8eff1f", 00:32:45.330 "method": "bdev_lvol_get_lvstores", 00:32:45.330 "req_id": 1 00:32:45.330 } 00:32:45.330 Got JSON-RPC error response 00:32:45.330 response: 00:32:45.330 { 00:32:45.330 "code": -19, 00:32:45.330 "message": "No such device" 00:32:45.330 } 00:32:45.330 20:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:32:45.330 20:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:45.330 20:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:45.330 20:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:45.330 20:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:45.591 aio_bdev 00:32:45.591 20:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fb0c68b5-2aa7-404a-b7b8-5a17661ba3e7 00:32:45.591 20:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=fb0c68b5-2aa7-404a-b7b8-5a17661ba3e7 00:32:45.591 20:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:45.591 20:10:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:45.591 20:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:45.591 20:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:45.591 20:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:45.851 20:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fb0c68b5-2aa7-404a-b7b8-5a17661ba3e7 -t 2000 00:32:45.851 [ 00:32:45.851 { 00:32:45.851 "name": "fb0c68b5-2aa7-404a-b7b8-5a17661ba3e7", 00:32:45.851 "aliases": [ 00:32:45.851 "lvs/lvol" 00:32:45.851 ], 00:32:45.851 "product_name": "Logical Volume", 00:32:45.851 "block_size": 4096, 00:32:45.851 "num_blocks": 38912, 00:32:45.851 "uuid": "fb0c68b5-2aa7-404a-b7b8-5a17661ba3e7", 00:32:45.851 "assigned_rate_limits": { 00:32:45.851 "rw_ios_per_sec": 0, 00:32:45.851 "rw_mbytes_per_sec": 0, 00:32:45.851 "r_mbytes_per_sec": 0, 00:32:45.851 "w_mbytes_per_sec": 0 00:32:45.851 }, 00:32:45.851 "claimed": false, 00:32:45.851 "zoned": false, 00:32:45.851 "supported_io_types": { 00:32:45.851 "read": true, 00:32:45.851 "write": true, 00:32:45.851 "unmap": true, 00:32:45.851 "flush": false, 00:32:45.851 "reset": true, 00:32:45.851 "nvme_admin": false, 00:32:45.851 "nvme_io": false, 00:32:45.851 "nvme_io_md": false, 00:32:45.851 "write_zeroes": true, 00:32:45.851 "zcopy": false, 00:32:45.851 "get_zone_info": false, 00:32:45.851 "zone_management": false, 00:32:45.851 "zone_append": false, 00:32:45.851 "compare": false, 00:32:45.851 "compare_and_write": false, 00:32:45.851 "abort": false, 00:32:45.851 "seek_hole": true, 00:32:45.851 "seek_data": true, 00:32:45.851 "copy": false, 00:32:45.851 "nvme_iov_md": false 00:32:45.851 }, 00:32:45.851 "driver_specific": { 00:32:45.851 "lvol": { 00:32:45.851 "lvol_store_uuid": "6711d9af-c393-4022-b462-95e38e8eff1f", 00:32:45.851 "base_bdev": "aio_bdev", 00:32:45.851 "thin_provision": false, 00:32:45.851 "num_allocated_clusters": 38, 00:32:45.851 "snapshot": false, 00:32:45.851 "clone": false, 00:32:45.851 "esnap_clone": false 00:32:45.851 } 00:32:45.851 } 00:32:45.851 } 00:32:45.851 ] 00:32:45.851 20:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:45.851 20:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6711d9af-c393-4022-b462-95e38e8eff1f 00:32:45.851 20:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:46.113 20:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:46.113 20:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6711d9af-c393-4022-b462-95e38e8eff1f 00:32:46.113 20:10:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:46.373 20:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:46.373 20:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fb0c68b5-2aa7-404a-b7b8-5a17661ba3e7 00:32:46.373 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6711d9af-c393-4022-b462-95e38e8eff1f 00:32:46.633 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:46.894 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:46.894 00:32:46.894 real 0m17.482s 00:32:46.894 user 0m35.493s 00:32:46.894 sys 0m2.987s 00:32:46.894 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:46.894 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:46.894 ************************************ 00:32:46.894 END TEST lvs_grow_dirty 00:32:46.894 ************************************ 00:32:46.894 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:46.894 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:32:46.894 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:32:46.894 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:32:46.894 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:46.894 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:32:46.894 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:32:46.894 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:32:46.894 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:46.894 nvmf_trace.0 00:32:46.894 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:32:46.894 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:46.894 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:46.894 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
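Annotation: the lvs_grow_dirty teardown traced above follows a fixed verify-then-delete pattern: the lvstore lookup is expected to fail with -19 while nothing is loaded, the dirty AIO file is re-attached so examine can reload the lvstore, the grown cluster counts are asserted, and only then is everything deleted. A condensed sketch of that pattern, reusing the paths and UUIDs from this run (not a verbatim excerpt of nvmf_lvs_grow.sh):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    aio_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
    lvs=6711d9af-c393-4022-b462-95e38e8eff1f
    lvol=fb0c68b5-2aa7-404a-b7b8-5a17661ba3e7

    # With no bdev loaded, the lookup must fail with -19 (No such device).
    if "$rpc" bdev_lvol_get_lvstores -u "$lvs"; then exit 1; fi

    # Re-attach the dirty backing file; examine reloads the lvstore and its lvol.
    "$rpc" bdev_aio_create "$aio_file" aio_bdev 4096
    "$rpc" bdev_wait_for_examine
    "$rpc" bdev_get_bdevs -b "$lvol" -t 2000 > /dev/null

    # Assert the geometry recorded above: 61 free clusters out of 99 total.
    (( $("$rpc" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters') == 61 ))
    (( $("$rpc" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters') == 99 ))

    # Teardown order matters: lvol, then lvstore, then the AIO bdev and its file.
    "$rpc" bdev_lvol_delete "$lvol"
    "$rpc" bdev_lvol_delete_lvstore -u "$lvs"
    "$rpc" bdev_aio_delete aio_bdev
    rm -f "$aio_file"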
00:32:46.894 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:46.894 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:46.894 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:46.894 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:46.894 rmmod nvme_tcp 00:32:46.894 rmmod nvme_fabrics 00:32:46.894 rmmod nvme_keyring 00:32:46.894 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:46.894 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:46.894 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:46.894 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3895337 ']' 00:32:46.894 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3895337 00:32:46.894 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3895337 ']' 00:32:46.894 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3895337 00:32:46.894 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:32:46.895 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:46.895 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3895337 00:32:47.156 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:47.156 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:47.156 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3895337' 00:32:47.156 killing process with pid 3895337 00:32:47.156 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3895337 00:32:47.156 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3895337 00:32:47.156 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:47.156 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:47.156 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:47.156 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:47.156 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:32:47.156 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:47.156 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:32:47.156 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:47.156 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:47.156 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:47.156 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:47.156 20:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:49.703 20:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:49.703 00:32:49.703 real 0m44.878s 00:32:49.703 user 0m54.049s 00:32:49.703 sys 0m10.709s 00:32:49.703 20:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:49.703 20:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:49.703 ************************************ 00:32:49.703 END TEST nvmf_lvs_grow 00:32:49.703 ************************************ 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:49.703 ************************************ 00:32:49.703 START TEST nvmf_bdev_io_wait 00:32:49.703 ************************************ 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:49.703 * Looking for test storage... 
00:32:49.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:49.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.703 --rc genhtml_branch_coverage=1 00:32:49.703 --rc genhtml_function_coverage=1 00:32:49.703 --rc genhtml_legend=1 00:32:49.703 --rc geninfo_all_blocks=1 00:32:49.703 --rc geninfo_unexecuted_blocks=1 00:32:49.703 00:32:49.703 ' 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:49.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.703 --rc genhtml_branch_coverage=1 00:32:49.703 --rc genhtml_function_coverage=1 00:32:49.703 --rc genhtml_legend=1 00:32:49.703 --rc geninfo_all_blocks=1 00:32:49.703 --rc geninfo_unexecuted_blocks=1 00:32:49.703 00:32:49.703 ' 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:49.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.703 --rc genhtml_branch_coverage=1 00:32:49.703 --rc genhtml_function_coverage=1 00:32:49.703 --rc genhtml_legend=1 00:32:49.703 --rc geninfo_all_blocks=1 00:32:49.703 --rc geninfo_unexecuted_blocks=1 00:32:49.703 00:32:49.703 ' 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:49.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.703 --rc genhtml_branch_coverage=1 00:32:49.703 --rc genhtml_function_coverage=1 00:32:49.703 --rc genhtml_legend=1 00:32:49.703 --rc geninfo_all_blocks=1 00:32:49.703 --rc 
geninfo_unexecuted_blocks=1 00:32:49.703 00:32:49.703 ' 00:32:49.703 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:49.704 20:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
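Annotation: the lcov probe traced a few entries back (scripts/common.sh: lt 1.15 2 via cmp_versions) decides which coverage flags to export by comparing dotted version strings field by field. A simplified re-implementation of that comparison for illustration only; version_lt is a hypothetical name, not the SPDK helper:

    version_lt() {                        # usage: version_lt 1.15 2  -> true (exit 0)
      local IFS=.-                        # split fields on '.' and '-', as cmp_versions does
      local -a v1=($1) v2=($2)
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for ((i = 0; i < n; i++)); do       # missing fields compare as 0 (so 2 == 2.0)
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1                            # equal versions are not less-than
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 \
      && echo 'lcov 1.x: fall back to the legacy --rc lcov_* coverage options'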
00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:56.477 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:56.738 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:56.738 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
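Annotation: the discovery traced here buckets every NIC by PCI vendor:device ID so the run can pick the two supported E810 ports (0x8086:0x159b). A rough standalone equivalent of that bucketing, with the net-device lookup that produces the "Found net devices under ..." lines below; SPDK's gather_supported_nvmf_pci_devs actually builds these arrays from a cached lspci scan rather than calling lspci per device:

    declare -a e810=() x722=() mlx=()
    while read -r addr _ id _; do         # lspci -Dn: "0000:4b:00.0 0200: 8086:159b (rev 02)"
      case $id in
        8086:1592|8086:159b) e810+=("$addr") ;;  # Intel E810 (matched twice in this run)
        8086:37d2)           x722+=("$addr") ;;  # Intel X722
        15b3:*)              mlx+=("$addr") ;;   # Mellanox ConnectX family
      esac
    done < <(lspci -Dn)
    for pci in "${e810[@]}"; do
      for net in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $net ]] && echo "Found net devices under $pci: ${net##*/}"
      done
    done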
00:32:56.738 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:56.738 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:56.738 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:56.738 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:56.738 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:56.738 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:56.738 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:56.738 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:56.738 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:56.738 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:56.738 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:56.738 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:56.738 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:56.738 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:56.738 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:56.738 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:56.738 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:56.738 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:56.738 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:56.739 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:56.739 
20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:56.739 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:56.739 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:57.001 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:57.001 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:57.001 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:57.001 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:57.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:57.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:32:57.001 00:32:57.001 --- 10.0.0.2 ping statistics --- 00:32:57.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.001 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:32:57.001 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:57.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:57.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:32:57.001 00:32:57.001 --- 10.0.0.1 ping statistics --- 00:32:57.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.001 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:32:57.001 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:57.001 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:32:57.001 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:57.001 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:57.001 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:57.001 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:57.001 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:57.001 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:57.001 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:57.001 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:57.001 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:57.001 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:57.001 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:57.001 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3900111 00:32:57.001 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3900111 00:32:57.001 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:32:57.001 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3900111 ']' 00:32:57.001 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:57.001 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:57.001 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:57.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
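Annotation: the nvmftestinit trace above wires the physical test topology: the first E810 port (cvl_0_0) is moved into a private network namespace to serve as the target side, the second (cvl_0_1) stays in the root namespace as the initiator, and reachability plus firewall access for port 4420 are verified before the target app starts. Condensed sketch of those steps, with names and addresses as in this run:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Tag the ACCEPT rule so nvmftestfini can strip it again with
    # iptables-save | grep -v SPDK_NVMF | iptables-restore.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
             -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns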
00:32:57.001 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:57.001 20:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:57.001 [2024-11-26 20:10:57.700979] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:57.001 [2024-11-26 20:10:57.702096] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:32:57.001 [2024-11-26 20:10:57.702147] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:57.001 [2024-11-26 20:10:57.802610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:57.263 [2024-11-26 20:10:57.858189] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:57.263 [2024-11-26 20:10:57.858242] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:57.263 [2024-11-26 20:10:57.858251] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:57.263 [2024-11-26 20:10:57.858258] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:57.264 [2024-11-26 20:10:57.858264] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:57.264 [2024-11-26 20:10:57.860567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:57.264 [2024-11-26 20:10:57.860729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:57.264 [2024-11-26 20:10:57.860890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.264 [2024-11-26 20:10:57.860890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:57.264 [2024-11-26 20:10:57.861251] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
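Annotation: waitforlisten (max_retries=100 in the trace) blocks until the freshly launched nvmf_tgt answers on its UNIX RPC socket. A minimal stand-in using the standard rpc_get_methods RPC; the exact probe and interval used by autotest_common.sh may differ:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
      # The RPC server only answers once the app has created its socket and event loop.
      if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
      fi
      sleep 0.5
    done
    (( i < 100 )) || { echo "nvmf_tgt (pid 3900111) never came up" >&2; exit 1; }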
00:32:57.835 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:57.835 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:32:57.835 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:57.835 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:57.835 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:57.835 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:57.835 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:57.835 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.835 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:57.835 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.835 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:57.835 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.835 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:57.835 [2024-11-26 20:10:58.596194] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:57.835 [2024-11-26 20:10:58.596535] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:57.835 [2024-11-26 20:10:58.597157] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:57.835 [2024-11-26 20:10:58.597289] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
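Annotation: because the target was started with --wait-for-rpc, framework initialization is parked until the script has shrunk the bdev_io pool; bdev_set_options is only accepted in that pre-init window, and framework_start_init then spins up the poll-group threads whose intr-mode notices appear above. The two-call sequence, as traced:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # A 5-element bdev_io pool with a 1-element per-thread cache makes submissions run
    # out of bdev_ios almost immediately, which is exactly what a bdev_io_wait test needs.
    "$rpc" bdev_set_options -p 5 -c 1
    "$rpc" framework_start_init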
00:32:57.835 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.835 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:57.835 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.835 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:57.836 [2024-11-26 20:10:58.609748] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:57.836 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.836 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:57.836 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.836 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:57.836 Malloc0 00:32:57.836 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.836 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:57.836 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.836 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:58.099 [2024-11-26 20:10:58.677997] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3900428 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3900430 00:32:58.099 20:10:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:58.099 { 00:32:58.099 "params": { 00:32:58.099 "name": "Nvme$subsystem", 00:32:58.099 "trtype": "$TEST_TRANSPORT", 00:32:58.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:58.099 "adrfam": "ipv4", 00:32:58.099 "trsvcid": "$NVMF_PORT", 00:32:58.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:58.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:58.099 "hdgst": ${hdgst:-false}, 00:32:58.099 "ddgst": ${ddgst:-false} 00:32:58.099 }, 00:32:58.099 "method": "bdev_nvme_attach_controller" 00:32:58.099 } 00:32:58.099 EOF 00:32:58.099 )") 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3900432 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3900435 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:58.099 { 00:32:58.099 "params": { 00:32:58.099 "name": "Nvme$subsystem", 00:32:58.099 "trtype": "$TEST_TRANSPORT", 00:32:58.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:58.099 "adrfam": "ipv4", 00:32:58.099 "trsvcid": "$NVMF_PORT", 00:32:58.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:58.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:58.099 "hdgst": ${hdgst:-false}, 00:32:58.099 "ddgst": ${ddgst:-false} 00:32:58.099 }, 00:32:58.099 "method": "bdev_nvme_attach_controller" 00:32:58.099 } 00:32:58.099 EOF 00:32:58.099 )") 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 
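Annotation: before the four bdevperf jobs launch, the trace above provisions the target end to end: a TCP transport with 8192-byte in-capsule data, a 64 MiB malloc bdev (MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE), a subsystem exposing it as a namespace, and a listener on the namespaced address. The RPC sequence, exactly as traced:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420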
00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:58.099 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:58.099 { 00:32:58.099 "params": { 00:32:58.099 "name": "Nvme$subsystem", 00:32:58.100 "trtype": "$TEST_TRANSPORT", 00:32:58.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:58.100 "adrfam": "ipv4", 00:32:58.100 "trsvcid": "$NVMF_PORT", 00:32:58.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:58.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:58.100 "hdgst": ${hdgst:-false}, 00:32:58.100 "ddgst": ${ddgst:-false} 00:32:58.100 }, 00:32:58.100 "method": "bdev_nvme_attach_controller" 00:32:58.100 } 00:32:58.100 EOF 00:32:58.100 )") 00:32:58.100 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:58.100 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:58.100 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:58.100 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:58.100 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:58.100 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:58.100 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:58.100 { 00:32:58.100 "params": { 00:32:58.100 "name": "Nvme$subsystem", 00:32:58.100 "trtype": "$TEST_TRANSPORT", 00:32:58.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:58.100 "adrfam": "ipv4", 00:32:58.100 "trsvcid": "$NVMF_PORT", 00:32:58.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:58.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:58.100 "hdgst": ${hdgst:-false}, 00:32:58.100 "ddgst": ${ddgst:-false} 00:32:58.100 }, 00:32:58.100 "method": "bdev_nvme_attach_controller" 00:32:58.100 } 00:32:58.100 EOF 00:32:58.100 )") 00:32:58.100 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:58.100 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3900428 00:32:58.100 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:58.100 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:58.100 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
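Annotation: gen_nvmf_target_json expands the heredoc template above once per subsystem into config[], joins the fragments, and pretty-prints the result with jq onto the /dev/fd/63 pipe each bdevperf reads. A standalone approximation; only the inner params block is verbatim from the trace, and the outer "subsystems"/"bdev" wrapper is an assumption about the helper's full output, which the trace does not show:

    config='{ "method": "bdev_nvme_attach_controller",
              "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                          "adrfam": "ipv4", "trsvcid": "4420",
                          "subnqn": "nqn.2016-06.io.spdk:cnode1",
                          "hostnqn": "nqn.2016-06.io.spdk:host1",
                          "hdgst": false, "ddgst": false } }'
    # Hypothetical wrapper shape; jq validates and pretty-prints the assembled document.
    jq . <<< "{ \"subsystems\": [ { \"subsystem\": \"bdev\", \"config\": [ $config ] } ] }"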
00:32:58.100 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:58.100 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:58.100 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:58.100 "params": { 00:32:58.100 "name": "Nvme1", 00:32:58.100 "trtype": "tcp", 00:32:58.100 "traddr": "10.0.0.2", 00:32:58.100 "adrfam": "ipv4", 00:32:58.100 "trsvcid": "4420", 00:32:58.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:58.100 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:58.100 "hdgst": false, 00:32:58.100 "ddgst": false 00:32:58.100 }, 00:32:58.100 "method": "bdev_nvme_attach_controller" 00:32:58.100 }' 00:32:58.100 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:58.100 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:58.100 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:58.100 "params": { 00:32:58.100 "name": "Nvme1", 00:32:58.100 "trtype": "tcp", 00:32:58.100 "traddr": "10.0.0.2", 00:32:58.100 "adrfam": "ipv4", 00:32:58.100 "trsvcid": "4420", 00:32:58.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:58.100 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:58.100 "hdgst": false, 00:32:58.100 "ddgst": false 00:32:58.100 }, 00:32:58.100 "method": "bdev_nvme_attach_controller" 00:32:58.100 }' 00:32:58.100 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:58.100 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:58.100 "params": { 00:32:58.100 "name": "Nvme1", 00:32:58.100 "trtype": "tcp", 00:32:58.100 "traddr": "10.0.0.2", 00:32:58.100 "adrfam": "ipv4", 00:32:58.100 "trsvcid": "4420", 00:32:58.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:58.100 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:58.100 "hdgst": false, 00:32:58.100 "ddgst": false 00:32:58.100 }, 00:32:58.100 "method": "bdev_nvme_attach_controller" 00:32:58.100 }' 00:32:58.100 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:58.100 20:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:58.100 "params": { 00:32:58.100 "name": "Nvme1", 00:32:58.100 "trtype": "tcp", 00:32:58.100 "traddr": "10.0.0.2", 00:32:58.100 "adrfam": "ipv4", 00:32:58.100 "trsvcid": "4420", 00:32:58.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:58.100 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:58.100 "hdgst": false, 00:32:58.100 "ddgst": false 00:32:58.100 }, 00:32:58.100 "method": "bdev_nvme_attach_controller" 00:32:58.100 }' 00:32:58.100 [2024-11-26 20:10:58.732951] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:32:58.100 [2024-11-26 20:10:58.733006] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:58.100 [2024-11-26 20:10:58.733842] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:32:58.100 [2024-11-26 20:10:58.733893] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:32:58.100 [2024-11-26 20:10:58.734333] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:32:58.100 [2024-11-26 20:10:58.734380] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:32:58.100 [2024-11-26 20:10:58.736072] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:32:58.100 [2024-11-26 20:10:58.736119] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:32:58.100 [2024-11-26 20:10:58.891904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:58.361 [2024-11-26 20:10:58.922219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:58.361 [2024-11-26 20:10:58.940297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:58.361 [2024-11-26 20:10:58.969761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:58.361 [2024-11-26 20:10:58.984319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:58.361 [2024-11-26 20:10:59.013232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:58.361 [2024-11-26 20:10:59.034343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:58.361 [2024-11-26 20:10:59.062868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:58.361 Running I/O for 1 seconds... 00:32:58.621 Running I/O for 1 seconds... 00:32:58.621 Running I/O for 1 seconds... 00:32:58.621 Running I/O for 1 seconds... 
00:32:59.565 7753.00 IOPS, 30.29 MiB/s 00:32:59.565 Latency(us) 00:32:59.565 [2024-11-26T19:11:00.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:59.565 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:32:59.565 Nvme1n1 : 1.02 7770.56 30.35 0.00 0.00 16357.47 2034.35 26651.31 00:32:59.565 [2024-11-26T19:11:00.386Z] =================================================================================================================== 00:32:59.565 [2024-11-26T19:11:00.386Z] Total : 7770.56 30.35 0.00 0.00 16357.47 2034.35 26651.31 00:32:59.565 14597.00 IOPS, 57.02 MiB/s 00:32:59.565 Latency(us) 00:32:59.565 [2024-11-26T19:11:00.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:59.565 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:32:59.565 Nvme1n1 : 1.01 14658.06 57.26 0.00 0.00 8706.41 2389.33 13489.49 00:32:59.565 [2024-11-26T19:11:00.386Z] =================================================================================================================== 00:32:59.565 [2024-11-26T19:11:00.386Z] Total : 14658.06 57.26 0.00 0.00 8706.41 2389.33 13489.49 00:32:59.565 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3900430 00:32:59.565 7658.00 IOPS, 29.91 MiB/s 00:32:59.565 Latency(us) 00:32:59.565 [2024-11-26T19:11:00.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:59.565 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:32:59.565 Nvme1n1 : 1.01 7749.39 30.27 0.00 0.00 16473.97 3932.16 36481.71 00:32:59.565 [2024-11-26T19:11:00.386Z] =================================================================================================================== 00:32:59.565 [2024-11-26T19:11:00.386Z] Total : 7749.39 30.27 0.00 0.00 16473.97 3932.16 36481.71 00:32:59.565 181200.00 IOPS, 707.81 MiB/s 00:32:59.565 Latency(us) 00:32:59.565 [2024-11-26T19:11:00.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:59.565 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:32:59.565 Nvme1n1 : 1.00 180843.60 706.42 0.00 0.00 704.34 296.96 1979.73 00:32:59.565 [2024-11-26T19:11:00.386Z] =================================================================================================================== 00:32:59.565 [2024-11-26T19:11:00.386Z] Total : 180843.60 706.42 0.00 0.00 704.34 296.96 1979.73 00:32:59.565 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3900432 00:32:59.827 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3900435 00:32:59.827 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:59.827 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.827 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:59.827 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.827 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:32:59.827 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:32:59.827 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:59.827 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:32:59.827 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:59.827 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:32:59.827 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:59.827 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:59.827 rmmod nvme_tcp 00:32:59.827 rmmod nvme_fabrics 00:32:59.827 rmmod nvme_keyring 00:32:59.827 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:59.827 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:32:59.827 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:32:59.827 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3900111 ']' 00:32:59.827 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3900111 00:32:59.827 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3900111 ']' 00:32:59.827 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3900111 00:32:59.827 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:32:59.827 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:59.827 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3900111 00:32:59.827 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:59.827 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:59.827 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3900111' 00:32:59.827 killing process with pid 3900111 00:32:59.827 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3900111 00:32:59.827 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3900111 00:33:00.088 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:00.088 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:00.088 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:00.088 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:33:00.088 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:33:00.088 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:00.088 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:33:00.088 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:00.088 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:00.088 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:00.088 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:00.088 20:11:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:02.001 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:02.001 00:33:02.001 real 0m12.728s 00:33:02.001 user 0m15.240s 00:33:02.001 sys 0m7.286s 00:33:02.001 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:02.001 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:02.001 ************************************ 00:33:02.001 END TEST nvmf_bdev_io_wait 00:33:02.001 ************************************ 00:33:02.263 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:02.263 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:02.263 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:02.263 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:02.263 ************************************ 00:33:02.263 START TEST nvmf_queue_depth 00:33:02.263 ************************************ 00:33:02.263 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:02.263 * Looking for test storage... 
00:33:02.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:02.263 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:02.263 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:33:02.263 20:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:02.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.263 --rc genhtml_branch_coverage=1 00:33:02.263 --rc genhtml_function_coverage=1 00:33:02.263 --rc genhtml_legend=1 00:33:02.263 --rc geninfo_all_blocks=1 00:33:02.263 --rc geninfo_unexecuted_blocks=1 00:33:02.263 00:33:02.263 ' 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:02.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.263 --rc genhtml_branch_coverage=1 00:33:02.263 --rc genhtml_function_coverage=1 00:33:02.263 --rc genhtml_legend=1 00:33:02.263 --rc geninfo_all_blocks=1 00:33:02.263 --rc geninfo_unexecuted_blocks=1 00:33:02.263 00:33:02.263 ' 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:02.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.263 --rc genhtml_branch_coverage=1 00:33:02.263 --rc genhtml_function_coverage=1 00:33:02.263 --rc genhtml_legend=1 00:33:02.263 --rc geninfo_all_blocks=1 00:33:02.263 --rc geninfo_unexecuted_blocks=1 00:33:02.263 00:33:02.263 ' 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:02.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.263 --rc genhtml_branch_coverage=1 00:33:02.263 --rc genhtml_function_coverage=1 00:33:02.263 --rc genhtml_legend=1 00:33:02.263 --rc geninfo_all_blocks=1 00:33:02.263 --rc 
geninfo_unexecuted_blocks=1 00:33:02.263 00:33:02.263 ' 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:02.263 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:33:02.525 20:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:10.656 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:10.656 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:33:10.656 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:10.656 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:10.656 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:10.656 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:33:10.656 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:10.656 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:33:10.656 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:10.656 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:33:10.656 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:33:10.656 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:33:10.656 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:33:10.656 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:33:10.656 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:33:10.656 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:10.656 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:10.657 20:11:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:10.657 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:10.657 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:33:10.657 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:10.657 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:10.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:10.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.538 ms 00:33:10.657 00:33:10.657 --- 10.0.0.2 ping statistics --- 00:33:10.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.657 rtt min/avg/max/mdev = 0.538/0.538/0.538/0.000 ms 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:10.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:10.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:33:10.657 00:33:10.657 --- 10.0.0.1 ping statistics --- 00:33:10.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.657 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:10.657 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:10.658 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:10.658 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:33:10.658 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:10.658 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:10.658 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:10.658 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3904875 00:33:10.658 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3904875 00:33:10.658 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3904875 ']' 00:33:10.658 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:10.658 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:10.658 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:10.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:10.658 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:10.658 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:10.658 20:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:10.658 [2024-11-26 20:11:10.411908] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:10.658 [2024-11-26 20:11:10.412860] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:33:10.658 [2024-11-26 20:11:10.412898] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:10.658 [2024-11-26 20:11:10.507498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.658 [2024-11-26 20:11:10.543601] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:10.658 [2024-11-26 20:11:10.543632] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:10.658 [2024-11-26 20:11:10.543640] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:10.658 [2024-11-26 20:11:10.543646] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:10.658 [2024-11-26 20:11:10.543652] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:10.658 [2024-11-26 20:11:10.544200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:10.658 [2024-11-26 20:11:10.599832] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:10.658 [2024-11-26 20:11:10.600083] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:10.658 [2024-11-26 20:11:11.252977] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:10.658 Malloc0 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:10.658 [2024-11-26 20:11:11.329151] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3905143 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3905143 /var/tmp/bdevperf.sock 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3905143 ']' 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:10.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:10.658 20:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:10.658 [2024-11-26 20:11:11.388369] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
00:33:10.658 [2024-11-26 20:11:11.388443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3905143 ] 00:33:10.918 [2024-11-26 20:11:11.479541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.918 [2024-11-26 20:11:11.532042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.488 20:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:11.488 20:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:11.488 20:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:11.488 20:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.488 20:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:11.748 NVMe0n1 00:33:11.748 20:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.748 20:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:11.748 Running I/O for 10 seconds... 00:33:14.074 8367.00 IOPS, 32.68 MiB/s [2024-11-26T19:11:15.833Z] 8709.00 IOPS, 34.02 MiB/s [2024-11-26T19:11:16.773Z] 9558.33 IOPS, 37.34 MiB/s [2024-11-26T19:11:17.711Z] 10498.25 IOPS, 41.01 MiB/s [2024-11-26T19:11:18.649Z] 11067.80 IOPS, 43.23 MiB/s [2024-11-26T19:11:19.589Z] 11443.00 IOPS, 44.70 MiB/s [2024-11-26T19:11:20.530Z] 11727.43 IOPS, 45.81 MiB/s [2024-11-26T19:11:21.914Z] 11917.88 IOPS, 46.55 MiB/s [2024-11-26T19:11:22.855Z] 12091.89 IOPS, 47.23 MiB/s [2024-11-26T19:11:22.855Z] 12230.90 IOPS, 47.78 MiB/s 00:33:22.034 Latency(us) 00:33:22.034 [2024-11-26T19:11:22.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:22.034 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:33:22.034 Verification LBA range: start 0x0 length 0x4000 00:33:22.034 NVMe0n1 : 10.06 12265.42 47.91 0.00 0.00 83171.81 14964.05 75147.95 00:33:22.034 [2024-11-26T19:11:22.855Z] =================================================================================================================== 00:33:22.034 [2024-11-26T19:11:22.855Z] Total : 12265.42 47.91 0.00 0.00 83171.81 14964.05 75147.95 00:33:22.034 { 00:33:22.034 "results": [ 00:33:22.034 { 00:33:22.034 "job": "NVMe0n1", 00:33:22.034 "core_mask": "0x1", 00:33:22.034 "workload": "verify", 00:33:22.034 "status": "finished", 00:33:22.034 "verify_range": { 00:33:22.034 "start": 0, 00:33:22.034 "length": 16384 00:33:22.034 }, 00:33:22.034 "queue_depth": 1024, 00:33:22.034 "io_size": 4096, 00:33:22.034 "runtime": 10.055341, 00:33:22.035 "iops": 12265.421928505459, 00:33:22.035 "mibps": 47.91180440822445, 00:33:22.035 "io_failed": 0, 00:33:22.035 "io_timeout": 0, 00:33:22.035 "avg_latency_us": 83171.81404706498, 00:33:22.035 "min_latency_us": 14964.053333333333, 00:33:22.035 "max_latency_us": 75147.94666666667 00:33:22.035 } 
00:33:22.035 ], 00:33:22.035 "core_count": 1 00:33:22.035 } 00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3905143 00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3905143 ']' 00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3905143 00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3905143 00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3905143' 00:33:22.035 killing process with pid 3905143 00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3905143 00:33:22.035 Received shutdown signal, test time was about 10.000000 seconds 00:33:22.035 00:33:22.035 Latency(us) 00:33:22.035 [2024-11-26T19:11:22.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:22.035 [2024-11-26T19:11:22.856Z] =================================================================================================================== 00:33:22.035 [2024-11-26T19:11:22.856Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3905143 00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:22.035 rmmod nvme_tcp 00:33:22.035 rmmod nvme_fabrics 00:33:22.035 rmmod nvme_keyring 00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
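The summary row above is internally consistent: throughput in MiB/s follows directly from the reported IOPS and the 4 KiB I/O size. A quick check (values taken from the JSON result above):

  # 12265.42 IOPS * 4096 B/IO = 50,239,160 B/s; divided by 2^20 for MiB/s:
  awk 'BEGIN { printf "%.2f MiB/s\n", 12265.42 * 4096 / 1048576 }'   # -> 47.91

which matches the reported 47.91 MiB/s over the 10.06 s runtime. The all-zero latency table printed after "Received shutdown signal" appears to be bdevperf's final print on teardown, after the real results have already been emitted, rather than a failed run.
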
00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3904875 ']' 00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3904875 00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3904875 ']' 00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3904875 00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:22.035 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3904875 00:33:22.297 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:22.297 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:22.297 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3904875' 00:33:22.297 killing process with pid 3904875 00:33:22.297 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3904875 00:33:22.297 20:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3904875 00:33:22.297 20:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:22.297 20:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:22.297 20:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:22.297 20:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:33:22.297 20:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:33:22.297 20:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:22.297 20:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:33:22.297 20:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:22.297 20:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:22.297 20:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.297 20:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:22.297 20:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:24.842 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:24.842 00:33:24.842 real 0m22.237s 00:33:24.842 user 0m24.675s 00:33:24.842 sys 0m7.213s 00:33:24.842 20:11:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:24.842 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:24.842 ************************************ 00:33:24.842 END TEST nvmf_queue_depth 00:33:24.842 ************************************ 00:33:24.842 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:24.842 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:24.842 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:24.842 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:24.842 ************************************ 00:33:24.842 START TEST nvmf_target_multipath 00:33:24.842 ************************************ 00:33:24.842 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:24.842 * Looking for test storage... 00:33:24.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:24.842 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:24.842 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:33:24.842 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:24.842 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:24.842 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:24.842 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:24.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.843 --rc genhtml_branch_coverage=1 00:33:24.843 --rc genhtml_function_coverage=1 00:33:24.843 --rc genhtml_legend=1 00:33:24.843 --rc geninfo_all_blocks=1 00:33:24.843 --rc geninfo_unexecuted_blocks=1 00:33:24.843 00:33:24.843 ' 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:24.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.843 --rc genhtml_branch_coverage=1 00:33:24.843 --rc genhtml_function_coverage=1 00:33:24.843 --rc genhtml_legend=1 00:33:24.843 --rc geninfo_all_blocks=1 00:33:24.843 --rc geninfo_unexecuted_blocks=1 00:33:24.843 00:33:24.843 ' 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:24.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.843 --rc genhtml_branch_coverage=1 00:33:24.843 --rc genhtml_function_coverage=1 00:33:24.843 --rc genhtml_legend=1 
00:33:24.843 --rc geninfo_all_blocks=1 00:33:24.843 --rc geninfo_unexecuted_blocks=1 00:33:24.843 00:33:24.843 ' 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:24.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.843 --rc genhtml_branch_coverage=1 00:33:24.843 --rc genhtml_function_coverage=1 00:33:24.843 --rc genhtml_legend=1 00:33:24.843 --rc geninfo_all_blocks=1 00:33:24.843 --rc geninfo_unexecuted_blocks=1 00:33:24.843 00:33:24.843 ' 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:24.843 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:24.844 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:24.844 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:24.844 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:24.844 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:24.844 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:24.844 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:24.844 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:24.844 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:24.844 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:24.844 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:24.844 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:24.844 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:24.844 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:24.844 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:24.844 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:24.844 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:24.844 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:24.844 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:33:24.844 20:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:32.984 20:11:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:32.984 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:32.984 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:32.984 20:11:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:32.984 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:32.984 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:32.985 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:32.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:32.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:33:32.985 00:33:32.985 --- 10.0.0.2 ping statistics --- 00:33:32.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:32.985 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:32.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:32.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:33:32.985 00:33:32.985 --- 10.0.0.1 ping statistics --- 00:33:32.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:32.985 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:33:32.985 only one NIC for nvmf test 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:32.985 rmmod nvme_tcp 00:33:32.985 rmmod nvme_fabrics 00:33:32.985 rmmod nvme_keyring 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:32.985 20:11:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:32.985 20:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:34.374 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:34.374 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:33:34.374 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:33:34.374 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:34.374 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:34.374 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:34.374 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:34.374 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:34.374 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:34.374 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:34.374 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:34.374 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:34.374 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:34.374 20:11:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:34.374 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:34.374 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:34.374 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:34.374 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:34.374 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:34.374 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:34.374 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:34.374 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:34.374 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:34.374 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:34.374 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:34.374 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:34.374 00:33:34.374 real 0m9.798s 00:33:34.374 user 0m2.132s 00:33:34.374 sys 0m5.613s 00:33:34.374 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:34.374 20:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:34.374 ************************************ 00:33:34.374 END TEST nvmf_target_multipath 00:33:34.375 ************************************ 00:33:34.375 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:34.375 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:34.375 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:34.375 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:34.375 ************************************ 00:33:34.375 START TEST nvmf_zcopy 00:33:34.375 ************************************ 00:33:34.375 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:34.375 * Looking for test storage... 
00:33:34.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:34.375 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:34.375 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:33:34.375 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:34.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.637 --rc genhtml_branch_coverage=1 00:33:34.637 --rc genhtml_function_coverage=1 00:33:34.637 --rc genhtml_legend=1 00:33:34.637 --rc geninfo_all_blocks=1 00:33:34.637 --rc geninfo_unexecuted_blocks=1 00:33:34.637 00:33:34.637 ' 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:34.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.637 --rc genhtml_branch_coverage=1 00:33:34.637 --rc genhtml_function_coverage=1 00:33:34.637 --rc genhtml_legend=1 00:33:34.637 --rc geninfo_all_blocks=1 00:33:34.637 --rc geninfo_unexecuted_blocks=1 00:33:34.637 00:33:34.637 ' 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:34.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.637 --rc genhtml_branch_coverage=1 00:33:34.637 --rc genhtml_function_coverage=1 00:33:34.637 --rc genhtml_legend=1 00:33:34.637 --rc geninfo_all_blocks=1 00:33:34.637 --rc geninfo_unexecuted_blocks=1 00:33:34.637 00:33:34.637 ' 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:34.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.637 --rc genhtml_branch_coverage=1 00:33:34.637 --rc genhtml_function_coverage=1 00:33:34.637 --rc genhtml_legend=1 00:33:34.637 --rc geninfo_all_blocks=1 00:33:34.637 --rc geninfo_unexecuted_blocks=1 00:33:34.637 00:33:34.637 ' 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:34.637 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:34.638 20:11:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:33:34.638 20:11:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:33:42.784 20:11:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:42.784 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:42.784 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:42.784 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:42.785 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:42.785 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:42.785 20:11:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:42.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:42.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms 00:33:42.785 00:33:42.785 --- 10.0.0.2 ping statistics --- 00:33:42.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:42.785 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:42.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:42.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:33:42.785 00:33:42.785 --- 10.0.0.1 ping statistics --- 00:33:42.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:42.785 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3915528 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3915528 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3915528 ']' 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:42.785 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:42.786 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:42.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:42.786 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:42.786 20:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:42.786 [2024-11-26 20:11:42.912017] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:42.786 [2024-11-26 20:11:42.913553] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:33:42.786 [2024-11-26 20:11:42.913610] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:42.786 [2024-11-26 20:11:43.014910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:42.786 [2024-11-26 20:11:43.065248] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:42.786 [2024-11-26 20:11:43.065302] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:42.786 [2024-11-26 20:11:43.065311] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:42.786 [2024-11-26 20:11:43.065318] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:42.786 [2024-11-26 20:11:43.065324] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:42.786 [2024-11-26 20:11:43.066080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:42.786 [2024-11-26 20:11:43.144240] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:42.786 [2024-11-26 20:11:43.144539] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
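The nvmf_tcp_init trace above reduces to a small two-namespace topology: the first ice port (cvl_0_0, 10.0.0.2) is moved into its own network namespace and plays the target, while the second port (cvl_0_1, 10.0.0.1) stays in the default namespace as the initiator. A minimal standalone sketch of the same setup, assuming the port names and addresses printed in this log (ipts above is just the harness wrapper around the iptables call traced right after it):

# Build the target/initiator split used by this test run (run as root).
ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                                  # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator check

nvmf_tgt is then launched inside that namespace (the ip netns exec prefix on the @508 line above), so its listener on 10.0.0.2:4420 is only reachable across the physical link from the initiator port.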
00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:42.786 [2024-11-26 20:11:43.234953] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:42.786 [2024-11-26 20:11:43.263339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:42.786 20:11:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:42.786 malloc0 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:42.786 { 00:33:42.786 "params": { 00:33:42.786 "name": "Nvme$subsystem", 00:33:42.786 "trtype": "$TEST_TRANSPORT", 00:33:42.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:42.786 "adrfam": "ipv4", 00:33:42.786 "trsvcid": "$NVMF_PORT", 00:33:42.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:42.786 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:42.786 "hdgst": ${hdgst:-false}, 00:33:42.786 "ddgst": ${ddgst:-false} 00:33:42.786 }, 00:33:42.786 "method": "bdev_nvme_attach_controller" 00:33:42.786 } 00:33:42.786 EOF 00:33:42.786 )") 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:42.786 20:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:42.786 "params": { 00:33:42.786 "name": "Nvme1", 00:33:42.786 "trtype": "tcp", 00:33:42.786 "traddr": "10.0.0.2", 00:33:42.786 "adrfam": "ipv4", 00:33:42.786 "trsvcid": "4420", 00:33:42.786 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:42.786 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:42.786 "hdgst": false, 00:33:42.786 "ddgst": false 00:33:42.786 }, 00:33:42.786 "method": "bdev_nvme_attach_controller" 00:33:42.786 }' 00:33:42.786 [2024-11-26 20:11:43.366941] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
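The rpc_cmd sequence traced above (target/zcopy.sh lines 22-30) is the whole target-side setup: a TCP transport with zero-copy explicitly enabled (--zcopy) and in-capsule data disabled (-c 0), subsystem nqn.2016-06.io.spdk:cnode1 allowing up to 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and a 32 MiB malloc bdev with 4096-byte blocks exported as NSID 1. In this harness rpc_cmd forwards to scripts/rpc.py, so a rough standalone equivalent, assuming the default /var/tmp/spdk.sock RPC socket, would be:

# Sketch only: the test issues these through its rpc_cmd wrapper.
scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1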
00:33:42.786 [2024-11-26 20:11:43.367008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3915745 ]
[2024-11-26 20:11:43.458113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-26 20:11:43.510873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:43.048 Running I/O for 10 seconds...
00:33:45.376 6405.00 IOPS, 50.04 MiB/s
[2024-11-26T19:11:47.140Z] 6443.00 IOPS, 50.34 MiB/s
[2024-11-26T19:11:48.083Z] 6463.33 IOPS, 50.49 MiB/s
[2024-11-26T19:11:49.027Z] 6475.00 IOPS, 50.59 MiB/s
[2024-11-26T19:11:49.970Z] 6893.40 IOPS, 53.85 MiB/s
[2024-11-26T19:11:50.910Z] 7351.67 IOPS, 57.43 MiB/s
[2024-11-26T19:11:52.295Z] 7671.29 IOPS, 59.93 MiB/s
[2024-11-26T19:11:52.865Z] 7912.38 IOPS, 61.82 MiB/s
[2024-11-26T19:11:54.336Z] 8101.67 IOPS, 63.29 MiB/s
[2024-11-26T19:11:54.336Z] 8254.80 IOPS, 64.49 MiB/s
00:33:53.515                                                                                 Latency(us)
00:33:53.515 [2024-11-26T19:11:54.336Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:53.516 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:33:53.516      Verification LBA range: start 0x0 length 0x1000
00:33:53.516      Nvme1n1                 :      10.01    8258.32      64.52       0.00       0.00   15454.65    2580.48   27743.57
00:33:53.516 [2024-11-26T19:11:54.337Z] ===================================================================================================================
00:33:53.516 [2024-11-26T19:11:54.337Z] Total                       :              8258.32      64.52       0.00       0.00   15454.65    2580.48   27743.57
00:33:53.516 20:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3917683
00:33:53.516 20:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:33:53.516 20:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:53.516 20:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:33:53.516 20:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:33:53.516 20:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:33:53.516 20:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:33:53.516 20:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:33:53.516 20:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:53.516 {
00:33:53.516   "params": {
00:33:53.516     "name": "Nvme$subsystem",
00:33:53.516     "trtype": "$TEST_TRANSPORT",
00:33:53.516     "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:53.516     "adrfam": "ipv4",
00:33:53.516     "trsvcid": "$NVMF_PORT",
00:33:53.516     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:53.516     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:53.516     "hdgst": ${hdgst:-false},
00:33:53.516     "ddgst": ${ddgst:-false}
00:33:53.516   },
00:33:53.516   "method": "bdev_nvme_attach_controller"
00:33:53.516 }
00:33:53.516 EOF
00:33:53.516 )")
[2024-11-26 20:11:53.982516] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:53.516 [2024-11-26 20:11:53.982547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:53.516 20:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:33:53.516 20:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:33:53.516 20:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:33:53.516 20:11:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:33:53.516   "params": {
00:33:53.516     "name": "Nvme1",
00:33:53.516     "trtype": "tcp",
00:33:53.516     "traddr": "10.0.0.2",
00:33:53.516     "adrfam": "ipv4",
00:33:53.516     "trsvcid": "4420",
00:33:53.516     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:33:53.516     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:33:53.516     "hdgst": false,
00:33:53.516     "ddgst": false
00:33:53.516   },
00:33:53.516   "method": "bdev_nvme_attach_controller"
00:33:53.516 }'
00:33:53.516 [2024-11-26 20:11:53.994474] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:53.516 [2024-11-26 20:11:53.994485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[2024-11-26 20:11:54.006472] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-11-26 20:11:54.006480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[2024-11-26 20:11:54.018472] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-11-26 20:11:54.018480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[2024-11-26 20:11:54.027640] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization...
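Both bdevperf invocations (--json /dev/fd/62 for the 10-second verify pass, --json /dev/fd/63 for the 5-second randrw pass just launched) read their bdev configuration from gen_nvmf_target_json via process substitution. From the params fragment printf'd in the trace, the assembled document is approximately the following; the outer subsystems/config wrapper comes from the function's final jq step and is inferred here rather than printed verbatim in this log (some SPDK revisions also wrap the entry with bdev_set_options and bdev_wait_for_examine):

gen_nvmf_target_json    # emits roughly:
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}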
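The long run of paired subsystem.c:2126 / nvmf_rpc.c:1520 errors that fills the rest of this excerpt is deliberate, not a failure: judging by the nvmf_rpc_ns_paused frames, each RPC pauses the subsystem, fails to add NSID 1 (it already exists), and resumes it, all while the 5-second randrw job keeps zcopy I/O in flight, which appears to be the point of this phase of the test. A hypothetical sketch of that pattern (not the test's literal loop, which lives in target/zcopy.sh):

# Hammer the add-ns path while bdevperf ($perfpid, here 3917683) is alive;
# every call is expected to fail fast with "Requested NSID 1 already in use".
while kill -0 "$perfpid" 2> /dev/null; do
	scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done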
00:33:53.516 [2024-11-26 20:11:54.027691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3917683 ] 00:33:53.516 [2024-11-26 20:11:54.030471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.516 [2024-11-26 20:11:54.030480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.516 [2024-11-26 20:11:54.042472] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.516 [2024-11-26 20:11:54.042480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.516 [2024-11-26 20:11:54.054472] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.516 [2024-11-26 20:11:54.054480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.516 [2024-11-26 20:11:54.066471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.516 [2024-11-26 20:11:54.066479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.516 [2024-11-26 20:11:54.078472] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.516 [2024-11-26 20:11:54.078481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.516 [2024-11-26 20:11:54.090471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.516 [2024-11-26 20:11:54.090479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.516 [2024-11-26 20:11:54.102471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.516 [2024-11-26 20:11:54.102484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.516 [2024-11-26 20:11:54.108967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.516 [2024-11-26 20:11:54.114471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.516 [2024-11-26 20:11:54.114479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.516 [2024-11-26 20:11:54.126472] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.516 [2024-11-26 20:11:54.126482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.516 [2024-11-26 20:11:54.138472] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.516 [2024-11-26 20:11:54.138482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.516 [2024-11-26 20:11:54.138732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:53.516 [2024-11-26 20:11:54.150473] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.516 [2024-11-26 20:11:54.150483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.516 [2024-11-26 20:11:54.162476] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.516 [2024-11-26 20:11:54.162488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.516 [2024-11-26 20:11:54.174476] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:33:53.516 [2024-11-26 20:11:54.174486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.516 [2024-11-26 20:11:54.186474] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.516 [2024-11-26 20:11:54.186483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.516 [2024-11-26 20:11:54.198470] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.516 [2024-11-26 20:11:54.198478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.516 [2024-11-26 20:11:54.210482] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.516 [2024-11-26 20:11:54.210499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.516 [2024-11-26 20:11:54.222474] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.516 [2024-11-26 20:11:54.222485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.516 [2024-11-26 20:11:54.234473] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.516 [2024-11-26 20:11:54.234484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.516 [2024-11-26 20:11:54.246471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.516 [2024-11-26 20:11:54.246480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.516 [2024-11-26 20:11:54.258471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.516 [2024-11-26 20:11:54.258479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.516 [2024-11-26 20:11:54.270470] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.516 [2024-11-26 20:11:54.270478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.516 [2024-11-26 20:11:54.282472] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.516 [2024-11-26 20:11:54.282482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.516 [2024-11-26 20:11:54.294471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.516 [2024-11-26 20:11:54.294481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.516 [2024-11-26 20:11:54.306471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.516 [2024-11-26 20:11:54.306478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.516 [2024-11-26 20:11:54.318471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.516 [2024-11-26 20:11:54.318482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.870 [2024-11-26 20:11:54.330472] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.870 [2024-11-26 20:11:54.330482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.870 [2024-11-26 20:11:54.342471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.870 [2024-11-26 20:11:54.342479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.870 [2024-11-26 
20:11:54.354471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.870 [2024-11-26 20:11:54.354478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.870 [2024-11-26 20:11:54.366471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.870 [2024-11-26 20:11:54.366478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.870 [2024-11-26 20:11:54.378471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.870 [2024-11-26 20:11:54.378480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.870 [2024-11-26 20:11:54.390471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.870 [2024-11-26 20:11:54.390478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.870 [2024-11-26 20:11:54.402471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.870 [2024-11-26 20:11:54.402478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.870 [2024-11-26 20:11:54.414470] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.870 [2024-11-26 20:11:54.414479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.870 [2024-11-26 20:11:54.426479] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.870 [2024-11-26 20:11:54.426495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.870 Running I/O for 5 seconds... 00:33:53.870 [2024-11-26 20:11:54.438474] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.870 [2024-11-26 20:11:54.438487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.870 [2024-11-26 20:11:54.453770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.870 [2024-11-26 20:11:54.453787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.870 [2024-11-26 20:11:54.466966] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.870 [2024-11-26 20:11:54.466982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.870 [2024-11-26 20:11:54.481785] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.870 [2024-11-26 20:11:54.481801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.870 [2024-11-26 20:11:54.494768] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.870 [2024-11-26 20:11:54.494783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.870 [2024-11-26 20:11:54.509411] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.870 [2024-11-26 20:11:54.509427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.870 [2024-11-26 20:11:54.522875] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.870 [2024-11-26 20:11:54.522890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.870 [2024-11-26 20:11:54.537851] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:33:53.870 [2024-11-26 20:11:54.537867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.870 [2024-11-26 20:11:54.550753] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.870 [2024-11-26 20:11:54.550767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.870 [2024-11-26 20:11:54.565582] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.870 [2024-11-26 20:11:54.565602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.870 [2024-11-26 20:11:54.578886] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.870 [2024-11-26 20:11:54.578902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.870 [2024-11-26 20:11:54.593984] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.870 [2024-11-26 20:11:54.594000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.870 [2024-11-26 20:11:54.607093] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.870 [2024-11-26 20:11:54.607109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.871 [2024-11-26 20:11:54.621931] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.871 [2024-11-26 20:11:54.621947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.871 [2024-11-26 20:11:54.635271] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.871 [2024-11-26 20:11:54.635286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.871 [2024-11-26 20:11:54.650517] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.871 [2024-11-26 20:11:54.650532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.871 [2024-11-26 20:11:54.663632] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.871 [2024-11-26 20:11:54.663647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:53.871 [2024-11-26 20:11:54.677820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:53.871 [2024-11-26 20:11:54.677835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:54.132 [2024-11-26 20:11:54.691057] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:54.132 [2024-11-26 20:11:54.691073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:54.132 [2024-11-26 20:11:54.706270] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:54.132 [2024-11-26 20:11:54.706285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:54.132 [2024-11-26 20:11:54.719456] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:54.132 [2024-11-26 20:11:54.719470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:54.132 [2024-11-26 20:11:54.733795] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:54.132 [2024-11-26 20:11:54.733810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:54.132 [2024-11-26 20:11:54.746771] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:54.132 [2024-11-26 20:11:54.746786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats at roughly 13-15 ms intervals from 20:11:54.746 through 20:11:58.723; only the periodic I/O statistics lines below differ ...]
00:33:54.654 18965.00 IOPS, 148.16 MiB/s [2024-11-26T19:11:55.475Z]
00:33:55.697 18945.50 IOPS, 148.01 MiB/s [2024-11-26T19:11:56.518Z]
00:33:56.740 18925.00 IOPS, 147.85 MiB/s [2024-11-26T19:11:57.561Z]
00:33:57.784 18931.25 IOPS, 147.90 MiB/s [2024-11-26T19:11:58.605Z]
00:33:58.046 [2024-11-26 20:11:58.737418] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.046 [2024-11-26 20:11:58.737433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.046 [2024-11-26 20:11:58.750545] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.046 [2024-11-26 20:11:58.750560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.046 [2024-11-26 20:11:58.763427] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.046 [2024-11-26 20:11:58.763442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.046 [2024-11-26 20:11:58.777473] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.046 [2024-11-26 20:11:58.777489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.046 [2024-11-26 20:11:58.790220] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.046 [2024-11-26 20:11:58.790236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.046 [2024-11-26 20:11:58.802906] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.046 [2024-11-26 20:11:58.802921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.046 [2024-11-26 20:11:58.817600] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.046 [2024-11-26 20:11:58.817615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.046 [2024-11-26 20:11:58.830777] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.046 [2024-11-26 20:11:58.830792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.046 [2024-11-26 20:11:58.845327] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.046 [2024-11-26 20:11:58.845343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.046 [2024-11-26 20:11:58.858561] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.046 [2024-11-26 20:11:58.858577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.308 [2024-11-26 20:11:58.871173] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.308 [2024-11-26 20:11:58.871189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.308 [2024-11-26 20:11:58.885734] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.308 [2024-11-26 20:11:58.885751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.308 [2024-11-26 20:11:58.899025] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.308 [2024-11-26 20:11:58.899040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.308 [2024-11-26 20:11:58.913543] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.308 [2024-11-26 20:11:58.913558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.308 [2024-11-26 20:11:58.926810] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.308 [2024-11-26 20:11:58.926825] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.308 [2024-11-26 20:11:58.942086] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.308 [2024-11-26 20:11:58.942102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.308 [2024-11-26 20:11:58.955112] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.308 [2024-11-26 20:11:58.955127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.308 [2024-11-26 20:11:58.969997] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.308 [2024-11-26 20:11:58.970021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.308 [2024-11-26 20:11:58.983203] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.308 [2024-11-26 20:11:58.983219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.308 [2024-11-26 20:11:58.997776] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.308 [2024-11-26 20:11:58.997792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.308 [2024-11-26 20:11:59.010462] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.308 [2024-11-26 20:11:59.010478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.308 [2024-11-26 20:11:59.023312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.308 [2024-11-26 20:11:59.023328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.308 [2024-11-26 20:11:59.037520] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.308 [2024-11-26 20:11:59.037537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.308 [2024-11-26 20:11:59.050553] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.308 [2024-11-26 20:11:59.050569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.308 [2024-11-26 20:11:59.063272] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.308 [2024-11-26 20:11:59.063288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.308 [2024-11-26 20:11:59.077907] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.308 [2024-11-26 20:11:59.077923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.308 [2024-11-26 20:11:59.091253] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.308 [2024-11-26 20:11:59.091269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.308 [2024-11-26 20:11:59.105882] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.308 [2024-11-26 20:11:59.105898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.308 [2024-11-26 20:11:59.119047] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.308 [2024-11-26 20:11:59.119063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.574 [2024-11-26 20:11:59.133338] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.574 [2024-11-26 20:11:59.133354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.574 [2024-11-26 20:11:59.146108] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.574 [2024-11-26 20:11:59.146123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.574 [2024-11-26 20:11:59.159374] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.574 [2024-11-26 20:11:59.159390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.574 [2024-11-26 20:11:59.173776] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.574 [2024-11-26 20:11:59.173792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.574 [2024-11-26 20:11:59.187163] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.574 [2024-11-26 20:11:59.187179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.574 [2024-11-26 20:11:59.201852] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.574 [2024-11-26 20:11:59.201868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.574 [2024-11-26 20:11:59.214901] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.574 [2024-11-26 20:11:59.214916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.574 [2024-11-26 20:11:59.229582] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.574 [2024-11-26 20:11:59.229601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.574 [2024-11-26 20:11:59.242629] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.574 [2024-11-26 20:11:59.242645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.574 [2024-11-26 20:11:59.255830] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.574 [2024-11-26 20:11:59.255845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.574 [2024-11-26 20:11:59.269735] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.574 [2024-11-26 20:11:59.269750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.574 [2024-11-26 20:11:59.282934] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.574 [2024-11-26 20:11:59.282949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.574 [2024-11-26 20:11:59.298063] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.574 [2024-11-26 20:11:59.298078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.574 [2024-11-26 20:11:59.311601] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.574 [2024-11-26 20:11:59.311616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.574 [2024-11-26 20:11:59.325748] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.574 [2024-11-26 20:11:59.325764] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.574 [2024-11-26 20:11:59.339050] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.574 [2024-11-26 20:11:59.339066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.574 [2024-11-26 20:11:59.353684] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.574 [2024-11-26 20:11:59.353700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.574 [2024-11-26 20:11:59.366838] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.574 [2024-11-26 20:11:59.366854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.574 [2024-11-26 20:11:59.381649] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.574 [2024-11-26 20:11:59.381665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.837 [2024-11-26 20:11:59.394896] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.837 [2024-11-26 20:11:59.394912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.837 [2024-11-26 20:11:59.410057] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.837 [2024-11-26 20:11:59.410073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.837 [2024-11-26 20:11:59.423226] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.837 [2024-11-26 20:11:59.423243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.837 [2024-11-26 20:11:59.437841] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.837 [2024-11-26 20:11:59.437857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.837 [2024-11-26 20:11:59.450647] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.837 [2024-11-26 20:11:59.450663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:58.837 18944.40 IOPS, 148.00 MiB/s
00:33:58.837                                                                                 Latency(us)
00:33:58.837 [2024-11-26T19:11:59.658Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:58.837 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:33:58.837      Nvme1n1                :       5.01   18947.53     148.03       0.00       0.00    6750.13    2662.40   11359.57
00:33:58.837 [2024-11-26T19:11:59.658Z] ===================================================================================================================
00:33:58.837 [2024-11-26T19:11:59.659Z]      Total                  :   18947.53     148.03       0.00       0.00    6750.13    2662.40   11359.57
00:33:58.838 [2024-11-26 20:11:59.458479] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.838 [2024-11-26 20:11:59.458494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.838 [2024-11-26 20:11:59.470475] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.838 [2024-11-26 20:11:59.470488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.838 [2024-11-26 20:11:59.482481] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.838 [2024-11-26 20:11:59.482492] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.838 [2024-11-26 20:11:59.494477] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.838 [2024-11-26 20:11:59.494489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.838 [2024-11-26 20:11:59.506476] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.838 [2024-11-26 20:11:59.506487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.838 [2024-11-26 20:11:59.518473] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.838 [2024-11-26 20:11:59.518482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.838 [2024-11-26 20:11:59.530472] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.838 [2024-11-26 20:11:59.530481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.838 [2024-11-26 20:11:59.542477] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.838 [2024-11-26 20:11:59.542489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.838 [2024-11-26 20:11:59.554471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.838 [2024-11-26 20:11:59.554480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3917683) - No such process 00:33:58.838 20:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3917683 00:33:58.838 20:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:58.838 20:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.838 20:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:58.838 20:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.838 20:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:58.838 20:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.838 20:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:58.838 delay0 00:33:58.838 20:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.838 20:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:33:58.838 20:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.838 20:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:58.838 20:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.838 20:11:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:33:59.098 [2024-11-26 20:11:59.772218] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:34:07.236 Initializing NVMe Controllers 00:34:07.236 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:07.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:07.236 Initialization complete. Launching workers. 00:34:07.236 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 6645 00:34:07.236 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 6929, failed to submit 36 00:34:07.236 success 6778, unsuccessful 151, failed 0 00:34:07.236 20:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:34:07.236 20:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:34:07.236 20:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:07.236 20:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:34:07.236 20:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:07.236 20:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:34:07.236 20:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:07.236 20:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:07.236 rmmod nvme_tcp 00:34:07.236 rmmod nvme_fabrics 00:34:07.236 rmmod nvme_keyring 00:34:07.236 20:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:07.236 20:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:34:07.236 20:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:34:07.236 20:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3915528 ']' 00:34:07.236 20:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3915528 00:34:07.236 20:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3915528 ']' 00:34:07.236 20:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3915528 00:34:07.236 20:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:34:07.236 20:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:07.236 20:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3915528 00:34:07.236 20:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:07.236 20:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:07.236 20:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3915528' 00:34:07.236 killing process with pid 3915528 00:34:07.236 20:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3915528 00:34:07.237 20:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3915528 00:34:07.237 20:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:07.237 20:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:07.237 20:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:07.237 20:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:34:07.237 20:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:34:07.237 20:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:07.237 20:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:34:07.237 20:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:07.237 20:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:07.237 20:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:07.237 20:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:07.237 20:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:08.620 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:08.620 00:34:08.620 real 0m34.038s 00:34:08.620 user 0m44.001s 00:34:08.620 sys 0m12.722s 00:34:08.620 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:08.620 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:08.620 ************************************ 00:34:08.620 END TEST nvmf_zcopy 00:34:08.620 ************************************ 00:34:08.620 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:08.620 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:08.620 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:08.620 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:08.620 ************************************ 00:34:08.620 START TEST nvmf_nmic 00:34:08.620 ************************************ 00:34:08.620 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:08.620 * Looking for test storage... 
00:34:08.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:08.620 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:08.620 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:34:08.620 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:08.620 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:08.620 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:08.620 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:08.620 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:08.620 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:34:08.620 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:34:08.620 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:34:08.620 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:34:08.620 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:34:08.620 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:34:08.620 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:34:08.620 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:08.620 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:34:08.620 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:34:08.620 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:08.620 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:08.620 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:08.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.621 --rc genhtml_branch_coverage=1 00:34:08.621 --rc genhtml_function_coverage=1 00:34:08.621 --rc genhtml_legend=1 00:34:08.621 --rc geninfo_all_blocks=1 00:34:08.621 --rc geninfo_unexecuted_blocks=1 00:34:08.621 00:34:08.621 ' 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:08.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.621 --rc genhtml_branch_coverage=1 00:34:08.621 --rc genhtml_function_coverage=1 00:34:08.621 --rc genhtml_legend=1 00:34:08.621 --rc geninfo_all_blocks=1 00:34:08.621 --rc geninfo_unexecuted_blocks=1 00:34:08.621 00:34:08.621 ' 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:08.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.621 --rc genhtml_branch_coverage=1 00:34:08.621 --rc genhtml_function_coverage=1 00:34:08.621 --rc genhtml_legend=1 00:34:08.621 --rc geninfo_all_blocks=1 00:34:08.621 --rc geninfo_unexecuted_blocks=1 00:34:08.621 00:34:08.621 ' 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:08.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.621 --rc genhtml_branch_coverage=1 00:34:08.621 --rc genhtml_function_coverage=1 00:34:08.621 --rc genhtml_legend=1 00:34:08.621 --rc geninfo_all_blocks=1 00:34:08.621 --rc geninfo_unexecuted_blocks=1 00:34:08.621 00:34:08.621 ' 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:08.621 20:12:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:08.621 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:08.882 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:08.882 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:08.882 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:34:08.882 20:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:17.024 20:12:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:17.024 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:17.025 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:17.025 20:12:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:17.025 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:17.025 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:17.025 
20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:17.025 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
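[Editor's note] The nvmf_tcp_init steps traced above are the stock nvmf/common.sh topology: the target-side port (cvl_0_0) is moved into its own network namespace and both ends get a 10.0.0.0/24 address, so initiator and target can talk over the same physical e810 pair on one host. A condensed sketch of the equivalent commands, using the interface names and addresses from this run (the trace continues below with the link-up, iptables, and ping steps):

    ip netns add cvl_0_0_ns_spdk                                       # namespace the nvmf target will run in
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move the target port out of the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                                 # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> root namespace
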
00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:17.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:17.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:34:17.025 00:34:17.025 --- 10.0.0.2 ping statistics --- 00:34:17.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:17.025 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:17.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:17.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:34:17.025 00:34:17.025 --- 10.0.0.1 ping statistics --- 00:34:17.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:17.025 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3924747 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 3924747 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3924747 ']' 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:17.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:17.025 20:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:17.025 [2024-11-26 20:12:17.025427] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:17.025 [2024-11-26 20:12:17.026539] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:34:17.025 [2024-11-26 20:12:17.026591] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:17.026 [2024-11-26 20:12:17.129030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:17.026 [2024-11-26 20:12:17.184971] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:17.026 [2024-11-26 20:12:17.185026] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:17.026 [2024-11-26 20:12:17.185035] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:17.026 [2024-11-26 20:12:17.185042] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:17.026 [2024-11-26 20:12:17.185049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:17.026 [2024-11-26 20:12:17.187109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:17.026 [2024-11-26 20:12:17.187147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:17.026 [2024-11-26 20:12:17.187285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:17.026 [2024-11-26 20:12:17.187287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:17.026 [2024-11-26 20:12:17.266237] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:17.026 [2024-11-26 20:12:17.266312] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:17.026 [2024-11-26 20:12:17.267218] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:17.026 [2024-11-26 20:12:17.267390] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:17.026 [2024-11-26 20:12:17.267494] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:17.287 [2024-11-26 20:12:17.896365] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:17.287 Malloc0 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
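Condensed, the target provisioning traced here is five JSON-RPC calls (a hedged recap via scripts/rpc.py with the long workspace paths abbreviated; arguments exactly as in the trace):

  rpc.py nvmf_create_transport -t tcp -o -u 8192       # transport opts from NVMF_TRANSPORT_OPTS
  rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MB RAM-backed bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The listener result and the expected-failure case (re-adding Malloc0 under a second subsystem, which fails because the bdev is already claimed exclusive_write) follow in the trace below.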
00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:17.287 [2024-11-26 20:12:17.992716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:34:17.287 test case1: single bdev can't be used in multiple subsystems 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.287 20:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:17.287 20:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.287 20:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:17.287 20:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.287 20:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:17.287 20:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.287 20:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:34:17.287 20:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:34:17.287 20:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.287 20:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:17.287 [2024-11-26 20:12:18.027962] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:34:17.287 [2024-11-26 20:12:18.027988] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:34:17.287 [2024-11-26 20:12:18.027997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.287 request: 00:34:17.287 { 00:34:17.287 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:34:17.287 "namespace": { 00:34:17.287 "bdev_name": "Malloc0", 00:34:17.287 "no_auto_visible": false, 00:34:17.287 "hide_metadata": false 00:34:17.287 }, 00:34:17.287 "method": "nvmf_subsystem_add_ns", 00:34:17.287 "req_id": 1 00:34:17.287 } 00:34:17.287 Got JSON-RPC error response 00:34:17.287 response: 00:34:17.287 { 00:34:17.287 "code": -32602, 00:34:17.287 "message": "Invalid parameters" 00:34:17.287 } 00:34:17.287 20:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:17.287 20:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:34:17.287 20:12:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:17.287 20:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:17.287 Adding namespace failed - expected result. 00:34:17.287 20:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:17.287 test case2: host connect to nvmf target in multiple paths 00:34:17.287 20:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:17.287 20:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.287 20:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:17.287 [2024-11-26 20:12:18.040120] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:17.287 20:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.287 20:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:17.858 20:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:34:18.429 20:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:18.429 20:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:34:18.429 20:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:18.429 20:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:18.429 20:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:34:20.349 20:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:20.349 20:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:20.349 20:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:20.349 20:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:20.349 20:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:20.349 20:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:34:20.349 20:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:20.349 [global] 00:34:20.349 thread=1 00:34:20.349 invalidate=1 
00:34:20.349 rw=write 00:34:20.349 time_based=1 00:34:20.349 runtime=1 00:34:20.349 ioengine=libaio 00:34:20.349 direct=1 00:34:20.349 bs=4096 00:34:20.349 iodepth=1 00:34:20.349 norandommap=0 00:34:20.349 numjobs=1 00:34:20.349 00:34:20.349 verify_dump=1 00:34:20.349 verify_backlog=512 00:34:20.349 verify_state_save=0 00:34:20.349 do_verify=1 00:34:20.349 verify=crc32c-intel 00:34:20.349 [job0] 00:34:20.349 filename=/dev/nvme0n1 00:34:20.349 Could not set queue depth (nvme0n1) 00:34:20.610 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:20.610 fio-3.35 00:34:20.610 Starting 1 thread 00:34:21.994 00:34:21.994 job0: (groupid=0, jobs=1): err= 0: pid=3925896: Tue Nov 26 20:12:22 2024 00:34:21.994 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:21.994 slat (nsec): min=6684, max=60736, avg=27180.80, stdev=4664.92 00:34:21.994 clat (usec): min=429, max=1019, avg=811.16, stdev=129.18 00:34:21.994 lat (usec): min=457, max=1046, avg=838.34, stdev=129.17 00:34:21.994 clat percentiles (usec): 00:34:21.994 | 1.00th=[ 510], 5.00th=[ 553], 10.00th=[ 594], 20.00th=[ 701], 00:34:21.994 | 30.00th=[ 750], 40.00th=[ 807], 50.00th=[ 848], 60.00th=[ 898], 00:34:21.994 | 70.00th=[ 906], 80.00th=[ 922], 90.00th=[ 947], 95.00th=[ 955], 00:34:21.994 | 99.00th=[ 988], 99.50th=[ 1012], 99.90th=[ 1020], 99.95th=[ 1020], 00:34:21.994 | 99.99th=[ 1020] 00:34:21.994 write: IOPS=954, BW=3816KiB/s (3908kB/s)(3820KiB/1001msec); 0 zone resets 00:34:21.994 slat (usec): min=9, max=29857, avg=65.09, stdev=965.11 00:34:21.994 clat (usec): min=164, max=798, avg=519.32, stdev=108.09 00:34:21.994 lat (usec): min=174, max=30481, avg=584.41, stdev=974.92 00:34:21.994 clat percentiles (usec): 00:34:21.994 | 1.00th=[ 262], 5.00th=[ 355], 10.00th=[ 392], 20.00th=[ 416], 00:34:21.994 | 30.00th=[ 474], 40.00th=[ 498], 50.00th=[ 523], 60.00th=[ 545], 00:34:21.994 | 70.00th=[ 578], 80.00th=[ 619], 90.00th=[ 668], 95.00th=[ 693], 00:34:21.994 | 99.00th=[ 742], 99.50th=[ 758], 99.90th=[ 799], 99.95th=[ 799], 00:34:21.994 | 99.99th=[ 799] 00:34:21.994 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:34:21.994 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:21.994 lat (usec) : 250=0.48%, 500=26.79%, 750=47.85%, 1000=24.68% 00:34:21.994 lat (msec) : 2=0.20% 00:34:21.994 cpu : usr=3.90%, sys=5.20%, ctx=1471, majf=0, minf=1 00:34:21.994 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.994 issued rwts: total=512,955,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:21.995 00:34:21.995 Run status group 0 (all jobs): 00:34:21.995 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:34:21.995 WRITE: bw=3816KiB/s (3908kB/s), 3816KiB/s-3816KiB/s (3908kB/s-3908kB/s), io=3820KiB (3912kB), run=1001-1001msec 00:34:21.995 00:34:21.995 Disk stats (read/write): 00:34:21.995 nvme0n1: ios=537/779, merge=0/0, ticks=1320/271, in_queue=1591, util=98.60% 00:34:21.995 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:21.995 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:34:21.995 20:12:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:21.995 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:34:21.995 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:21.995 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:21.995 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:21.995 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:21.995 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:34:21.995 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:21.995 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:34:21.995 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:21.995 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:34:21.995 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:21.995 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:34:21.995 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:21.995 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:21.995 rmmod nvme_tcp 00:34:21.995 rmmod nvme_fabrics 00:34:21.995 rmmod nvme_keyring 00:34:22.255 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:22.255 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:34:22.255 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:34:22.255 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3924747 ']' 00:34:22.255 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3924747 00:34:22.255 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3924747 ']' 00:34:22.255 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3924747 00:34:22.255 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:34:22.255 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:22.256 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3924747 00:34:22.256 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:22.256 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:22.256 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 3924747' 00:34:22.256 killing process with pid 3924747 00:34:22.256 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3924747 00:34:22.256 20:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3924747 00:34:22.256 20:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:22.256 20:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:22.256 20:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:22.256 20:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:34:22.256 20:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:34:22.256 20:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:22.256 20:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:34:22.256 20:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:22.256 20:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:22.256 20:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:22.256 20:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:22.256 20:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:24.804 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:24.804 00:34:24.804 real 0m15.920s 00:34:24.804 user 0m37.957s 00:34:24.804 sys 0m7.418s 00:34:24.804 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:24.804 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:24.804 ************************************ 00:34:24.804 END TEST nvmf_nmic 00:34:24.804 ************************************ 00:34:24.804 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:24.804 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:24.804 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:24.804 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:24.804 ************************************ 00:34:24.804 START TEST nvmf_fio_target 00:34:24.804 ************************************ 00:34:24.804 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:24.804 * Looking for test storage... 
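The trace below steps through scripts/common.sh's version comparison (lt 1.15 2, i.e. "is the installed lcov older than 2?"). A condensed sketch of that logic, assuming the helper name from the trace:

  lt() {                                     # usage: lt VER1 VER2 -> true if VER1 < VER2
      local -a ver1 ver2
      local v
      IFS=.-: read -ra ver1 <<< "$1"         # split components on '.', '-', ':'
      IFS=.-: read -ra ver2 <<< "$2"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
      done
      return 1                               # all components equal -> not less-than
  }

Here 1 < 2 already decides at the first component, so the comparison returns 0 and the lcov-1.x style LCOV_OPTS (--rc lcov_branch_coverage=1 etc.) seen below get exported.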
00:34:24.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:24.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:24.805 --rc genhtml_branch_coverage=1 00:34:24.805 --rc genhtml_function_coverage=1 00:34:24.805 --rc genhtml_legend=1 00:34:24.805 --rc geninfo_all_blocks=1 00:34:24.805 --rc geninfo_unexecuted_blocks=1 00:34:24.805 00:34:24.805 ' 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:24.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:24.805 --rc genhtml_branch_coverage=1 00:34:24.805 --rc genhtml_function_coverage=1 00:34:24.805 --rc genhtml_legend=1 00:34:24.805 --rc geninfo_all_blocks=1 00:34:24.805 --rc geninfo_unexecuted_blocks=1 00:34:24.805 00:34:24.805 ' 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:24.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:24.805 --rc genhtml_branch_coverage=1 00:34:24.805 --rc genhtml_function_coverage=1 00:34:24.805 --rc genhtml_legend=1 00:34:24.805 --rc geninfo_all_blocks=1 00:34:24.805 --rc geninfo_unexecuted_blocks=1 00:34:24.805 00:34:24.805 ' 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:24.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:24.805 --rc genhtml_branch_coverage=1 00:34:24.805 --rc genhtml_function_coverage=1 00:34:24.805 --rc genhtml_legend=1 00:34:24.805 --rc geninfo_all_blocks=1 00:34:24.805 --rc geninfo_unexecuted_blocks=1 00:34:24.805 
00:34:24.805 ' 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.805 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:24.806 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.806 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:24.806 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:24.806 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:24.806 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:24.806 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:24.806 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:24.806 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:24.806 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:24.806 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:24.806 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:24.806 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:24.806 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:24.806 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:24.806 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:24.806 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:24.806 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:24.806 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:24.806 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:24.806 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:24.806 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:24.806 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:24.806 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:24.806 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:24.806 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:24.806 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:24.806 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:24.806 20:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:32.946 20:12:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:32.946 20:12:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:32.946 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:32.946 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:32.946 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:32.946 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:32.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:32.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms 00:34:32.946 00:34:32.946 --- 10.0.0.2 ping statistics --- 00:34:32.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:32.946 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:32.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:32.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:34:32.946 00:34:32.946 --- 10.0.0.1 ping statistics --- 00:34:32.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:32.946 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:32.946 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:32.947 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:32.947 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:32.947 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:32.947 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:32.947 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:32.947 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:32.947 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:32.947 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:32.947 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3930278 00:34:32.947 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3930278 00:34:32.947 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:32.947 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3930278 ']' 00:34:32.947 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:32.947 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:32.947 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:32.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
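nvmfappstart launches the target inside the namespace and waitforlisten then blocks until the JSON-RPC socket answers. A rough equivalent of that startup/wait pair (hedged sketch; default socket /var/tmp/spdk.sock, workspace paths abbreviated):

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  # poll the RPC socket until the app is up -- roughly what waitforlisten does
  until scripts/rpc.py -s /var/tmp/spdk.sock -t 1 spdk_get_version >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || exit 1   # abort if the target died while starting
      sleep 0.5
  done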
00:34:32.947 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:32.947 20:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:32.947 [2024-11-26 20:12:33.035442] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:32.947 [2024-11-26 20:12:33.036565] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:34:32.947 [2024-11-26 20:12:33.036615] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:32.947 [2024-11-26 20:12:33.136745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:32.947 [2024-11-26 20:12:33.189832] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:32.947 [2024-11-26 20:12:33.189883] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:32.947 [2024-11-26 20:12:33.189891] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:32.947 [2024-11-26 20:12:33.189903] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:32.947 [2024-11-26 20:12:33.189910] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:32.947 [2024-11-26 20:12:33.192148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:32.947 [2024-11-26 20:12:33.192310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:32.947 [2024-11-26 20:12:33.192542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:32.947 [2024-11-26 20:12:33.192542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:32.947 [2024-11-26 20:12:33.271042] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:32.947 [2024-11-26 20:12:33.272215] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:32.947 [2024-11-26 20:12:33.272375] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:32.947 [2024-11-26 20:12:33.272695] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:32.947 [2024-11-26 20:12:33.272727] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
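With the target up in interrupt mode on cores 0-3, everything fio.sh does next is driven over the app's JSON-RPC socket. Condensed from the trace that follows — with rpc.py standing in for the full /var/jenkins/workspace/.../spdk/scripts/rpc.py path, and ordering preserved — the provisioning sequence is roughly:

    # create the TCP transport (the -o/-u flags mirror the trace below)
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    # seven 64 MB malloc bdevs with 512 B blocks; each call prints the new
    # name (Malloc0..Malloc6), which fio.sh captures into shell variables
    rpc.py bdev_malloc_create 64 512
    # two of them become a RAID-0, three more a concat array
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    # one subsystem, four namespaces, one TCP listener
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    # kernel initiator connects from the default namespace
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be

The four namespaces then surface on the initiator as /dev/nvme0n1../dev/nvme0n4, which is exactly what waitforserial counts with lsblk (grep -c SPDKISFASTANDAWESOME) before handing the devices to the fio jobs below.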
00:34:33.207 20:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:33.207 20:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:34:33.207 20:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:33.207 20:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:33.207 20:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:33.207 20:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:33.207 20:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:33.468 [2024-11-26 20:12:34.069495] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:33.468 20:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:33.728 20:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:33.728 20:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:33.988 20:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:33.988 20:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:33.988 20:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:33.988 20:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:34.248 20:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:34.248 20:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:34.509 20:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:34.770 20:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:34.770 20:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:34.770 20:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:34.770 20:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:35.031 20:12:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:34:35.031 20:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:35.292 20:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:35.552 20:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:35.552 20:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:35.553 20:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:35.553 20:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:35.813 20:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:36.073 [2024-11-26 20:12:36.677408] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:36.073 20:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:36.333 20:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:36.333 20:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:36.904 20:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:36.904 20:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:34:36.904 20:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:36.904 20:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:34:36.904 20:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:34:36.904 20:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:34:38.815 20:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:38.815 20:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:34:38.815 20:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:38.815 20:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:34:38.815 20:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:38.815 20:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:34:38.815 20:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:38.815 [global] 00:34:38.815 thread=1 00:34:38.815 invalidate=1 00:34:38.815 rw=write 00:34:38.815 time_based=1 00:34:38.815 runtime=1 00:34:38.815 ioengine=libaio 00:34:38.815 direct=1 00:34:38.815 bs=4096 00:34:38.815 iodepth=1 00:34:38.815 norandommap=0 00:34:38.815 numjobs=1 00:34:38.815 00:34:38.815 verify_dump=1 00:34:38.815 verify_backlog=512 00:34:38.815 verify_state_save=0 00:34:38.815 do_verify=1 00:34:38.815 verify=crc32c-intel 00:34:39.075 [job0] 00:34:39.075 filename=/dev/nvme0n1 00:34:39.075 [job1] 00:34:39.075 filename=/dev/nvme0n2 00:34:39.075 [job2] 00:34:39.075 filename=/dev/nvme0n3 00:34:39.075 [job3] 00:34:39.075 filename=/dev/nvme0n4 00:34:39.075 Could not set queue depth (nvme0n1) 00:34:39.075 Could not set queue depth (nvme0n2) 00:34:39.075 Could not set queue depth (nvme0n3) 00:34:39.075 Could not set queue depth (nvme0n4) 00:34:39.350 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:39.350 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:39.350 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:39.350 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:39.350 fio-3.35 00:34:39.350 Starting 4 threads 00:34:40.737 00:34:40.737 job0: (groupid=0, jobs=1): err= 0: pid=3931858: Tue Nov 26 20:12:41 2024 00:34:40.737 read: IOPS=15, BW=63.9KiB/s (65.5kB/s)(64.0KiB/1001msec) 00:34:40.737 slat (nsec): min=26101, max=26960, avg=26473.56, stdev=238.84 00:34:40.737 clat (usec): min=40977, max=42117, avg=41717.96, stdev=417.74 00:34:40.737 lat (usec): min=41003, max=42144, avg=41744.43, stdev=417.69 00:34:40.737 clat percentiles (usec): 00:34:40.737 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:40.737 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:34:40.737 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:40.737 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:40.737 | 99.99th=[42206] 00:34:40.737 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:34:40.737 slat (usec): min=9, max=21520, avg=76.03, stdev=950.53 00:34:40.737 clat (usec): min=113, max=3357, avg=567.71, stdev=212.87 00:34:40.737 lat (usec): min=125, max=21862, avg=643.74, stdev=965.11 00:34:40.737 clat percentiles (usec): 00:34:40.737 | 1.00th=[ 265], 5.00th=[ 330], 10.00th=[ 363], 20.00th=[ 441], 00:34:40.737 | 30.00th=[ 486], 40.00th=[ 519], 50.00th=[ 553], 60.00th=[ 603], 00:34:40.737 | 70.00th=[ 635], 80.00th=[ 676], 90.00th=[ 742], 95.00th=[ 816], 00:34:40.737 | 
99.00th=[ 906], 99.50th=[ 1045], 99.90th=[ 3359], 99.95th=[ 3359], 00:34:40.737 | 99.99th=[ 3359] 00:34:40.737 bw ( KiB/s): min= 4096, max= 4096, per=40.04%, avg=4096.00, stdev= 0.00, samples=1 00:34:40.737 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:40.737 lat (usec) : 250=0.38%, 500=32.95%, 750=55.30%, 1000=7.77% 00:34:40.737 lat (msec) : 2=0.19%, 4=0.38%, 50=3.03% 00:34:40.737 cpu : usr=0.90%, sys=1.50%, ctx=531, majf=0, minf=1 00:34:40.737 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:40.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.737 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:40.737 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:40.737 job1: (groupid=0, jobs=1): err= 0: pid=3931859: Tue Nov 26 20:12:41 2024 00:34:40.737 read: IOPS=754, BW=3017KiB/s (3089kB/s)(3020KiB/1001msec) 00:34:40.737 slat (nsec): min=7155, max=62353, avg=25757.79, stdev=6099.37 00:34:40.737 clat (usec): min=259, max=1145, avg=803.30, stdev=156.18 00:34:40.737 lat (usec): min=267, max=1171, avg=829.06, stdev=157.25 00:34:40.737 clat percentiles (usec): 00:34:40.737 | 1.00th=[ 379], 5.00th=[ 502], 10.00th=[ 594], 20.00th=[ 676], 00:34:40.737 | 30.00th=[ 742], 40.00th=[ 783], 50.00th=[ 824], 60.00th=[ 865], 00:34:40.737 | 70.00th=[ 898], 80.00th=[ 938], 90.00th=[ 979], 95.00th=[ 1020], 00:34:40.737 | 99.00th=[ 1090], 99.50th=[ 1123], 99.90th=[ 1139], 99.95th=[ 1139], 00:34:40.737 | 99.99th=[ 1139] 00:34:40.737 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:34:40.737 slat (usec): min=3, max=1185, avg=21.29, stdev=39.06 00:34:40.737 clat (usec): min=103, max=792, avg=333.42, stdev=160.79 00:34:40.737 lat (usec): min=107, max=1604, avg=354.71, stdev=174.14 00:34:40.737 clat percentiles (usec): 00:34:40.737 | 1.00th=[ 111], 5.00th=[ 117], 10.00th=[ 122], 20.00th=[ 135], 00:34:40.737 | 30.00th=[ 235], 40.00th=[ 297], 50.00th=[ 326], 60.00th=[ 371], 00:34:40.737 | 70.00th=[ 420], 80.00th=[ 482], 90.00th=[ 553], 95.00th=[ 611], 00:34:40.737 | 99.00th=[ 709], 99.50th=[ 742], 99.90th=[ 791], 99.95th=[ 791], 00:34:40.737 | 99.99th=[ 791] 00:34:40.737 bw ( KiB/s): min= 4087, max= 4087, per=39.95%, avg=4087.00, stdev= 0.00, samples=1 00:34:40.737 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:34:40.737 lat (usec) : 250=18.44%, 500=31.37%, 750=20.91%, 1000=26.14% 00:34:40.737 lat (msec) : 2=3.15% 00:34:40.737 cpu : usr=1.90%, sys=4.30%, ctx=1784, majf=0, minf=1 00:34:40.737 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:40.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.737 issued rwts: total=755,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:40.737 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:40.737 job2: (groupid=0, jobs=1): err= 0: pid=3931860: Tue Nov 26 20:12:41 2024 00:34:40.737 read: IOPS=262, BW=1051KiB/s (1076kB/s)(1052KiB/1001msec) 00:34:40.737 slat (nsec): min=9483, max=46038, avg=26930.85, stdev=2911.73 00:34:40.737 clat (usec): min=909, max=41251, avg=2414.12, stdev=6873.06 00:34:40.737 lat (usec): min=935, max=41278, avg=2441.05, stdev=6873.01 00:34:40.737 clat percentiles (usec): 00:34:40.737 | 1.00th=[ 938], 5.00th=[ 1057], 10.00th=[ 1106], 20.00th=[ 1156], 
00:34:40.737 | 30.00th=[ 1172], 40.00th=[ 1188], 50.00th=[ 1205], 60.00th=[ 1221], 00:34:40.737 | 70.00th=[ 1254], 80.00th=[ 1270], 90.00th=[ 1303], 95.00th=[ 1352], 00:34:40.737 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:40.737 | 99.99th=[41157] 00:34:40.737 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:34:40.737 slat (usec): min=4, max=2144, avg=34.85, stdev=94.10 00:34:40.737 clat (usec): min=223, max=1360, avg=653.71, stdev=142.43 00:34:40.737 lat (usec): min=237, max=2716, avg=688.57, stdev=170.46 00:34:40.737 clat percentiles (usec): 00:34:40.737 | 1.00th=[ 351], 5.00th=[ 408], 10.00th=[ 474], 20.00th=[ 537], 00:34:40.737 | 30.00th=[ 594], 40.00th=[ 627], 50.00th=[ 660], 60.00th=[ 701], 00:34:40.737 | 70.00th=[ 734], 80.00th=[ 758], 90.00th=[ 807], 95.00th=[ 865], 00:34:40.737 | 99.00th=[ 1029], 99.50th=[ 1139], 99.90th=[ 1369], 99.95th=[ 1369], 00:34:40.737 | 99.99th=[ 1369] 00:34:40.737 bw ( KiB/s): min= 4096, max= 4096, per=40.04%, avg=4096.00, stdev= 0.00, samples=1 00:34:40.737 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:40.737 lat (usec) : 250=0.26%, 500=10.06%, 750=40.52%, 1000=15.23% 00:34:40.737 lat (msec) : 2=32.90%, 50=1.03% 00:34:40.737 cpu : usr=0.90%, sys=2.50%, ctx=779, majf=0, minf=1 00:34:40.737 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:40.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.737 issued rwts: total=263,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:40.737 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:40.737 job3: (groupid=0, jobs=1): err= 0: pid=3931861: Tue Nov 26 20:12:41 2024 00:34:40.737 read: IOPS=499, BW=1998KiB/s (2046kB/s)(2000KiB/1001msec) 00:34:40.737 slat (nsec): min=5485, max=34606, avg=11212.18, stdev=7165.55 00:34:40.737 clat (usec): min=422, max=42006, avg=1301.52, stdev=4833.64 00:34:40.737 lat (usec): min=430, max=42033, avg=1312.73, stdev=4835.30 00:34:40.737 clat percentiles (usec): 00:34:40.737 | 1.00th=[ 494], 5.00th=[ 603], 10.00th=[ 660], 20.00th=[ 685], 00:34:40.737 | 30.00th=[ 701], 40.00th=[ 709], 50.00th=[ 725], 60.00th=[ 734], 00:34:40.737 | 70.00th=[ 750], 80.00th=[ 775], 90.00th=[ 824], 95.00th=[ 873], 00:34:40.737 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:40.737 | 99.99th=[42206] 00:34:40.737 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:34:40.737 slat (nsec): min=3569, max=68529, avg=30264.07, stdev=11355.37 00:34:40.737 clat (usec): min=120, max=1351, avg=631.19, stdev=152.83 00:34:40.737 lat (usec): min=132, max=1386, avg=661.45, stdev=156.75 00:34:40.737 clat percentiles (usec): 00:34:40.737 | 1.00th=[ 281], 5.00th=[ 371], 10.00th=[ 429], 20.00th=[ 498], 00:34:40.737 | 30.00th=[ 553], 40.00th=[ 603], 50.00th=[ 644], 60.00th=[ 685], 00:34:40.737 | 70.00th=[ 717], 80.00th=[ 758], 90.00th=[ 807], 95.00th=[ 848], 00:34:40.737 | 99.00th=[ 963], 99.50th=[ 1057], 99.90th=[ 1352], 99.95th=[ 1352], 00:34:40.737 | 99.99th=[ 1352] 00:34:40.737 bw ( KiB/s): min= 4096, max= 4096, per=40.04%, avg=4096.00, stdev= 0.00, samples=1 00:34:40.737 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:40.737 lat (usec) : 250=0.20%, 500=10.47%, 750=63.04%, 1000=25.20% 00:34:40.737 lat (msec) : 2=0.40%, 50=0.69% 00:34:40.737 cpu : usr=0.70%, sys=2.50%, ctx=1013, majf=0, minf=1 00:34:40.737 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:40.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.737 issued rwts: total=500,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:40.737 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:40.737 00:34:40.737 Run status group 0 (all jobs): 00:34:40.737 READ: bw=6130KiB/s (6277kB/s), 63.9KiB/s-3017KiB/s (65.5kB/s-3089kB/s), io=6136KiB (6283kB), run=1001-1001msec 00:34:40.737 WRITE: bw=9.99MiB/s (10.5MB/s), 2046KiB/s-4092KiB/s (2095kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:34:40.737 00:34:40.737 Disk stats (read/write): 00:34:40.737 nvme0n1: ios=61/512, merge=0/0, ticks=656/275, in_queue=931, util=85.87% 00:34:40.738 nvme0n2: ios=632/1024, merge=0/0, ticks=557/331, in_queue=888, util=88.32% 00:34:40.738 nvme0n3: ios=288/512, merge=0/0, ticks=582/321, in_queue=903, util=95.17% 00:34:40.738 nvme0n4: ios=338/512, merge=0/0, ticks=658/307, in_queue=965, util=97.01% 00:34:40.738 20:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:40.738 [global] 00:34:40.738 thread=1 00:34:40.738 invalidate=1 00:34:40.738 rw=randwrite 00:34:40.738 time_based=1 00:34:40.738 runtime=1 00:34:40.738 ioengine=libaio 00:34:40.738 direct=1 00:34:40.738 bs=4096 00:34:40.738 iodepth=1 00:34:40.738 norandommap=0 00:34:40.738 numjobs=1 00:34:40.738 00:34:40.738 verify_dump=1 00:34:40.738 verify_backlog=512 00:34:40.738 verify_state_save=0 00:34:40.738 do_verify=1 00:34:40.738 verify=crc32c-intel 00:34:40.738 [job0] 00:34:40.738 filename=/dev/nvme0n1 00:34:40.738 [job1] 00:34:40.738 filename=/dev/nvme0n2 00:34:40.738 [job2] 00:34:40.738 filename=/dev/nvme0n3 00:34:40.738 [job3] 00:34:40.738 filename=/dev/nvme0n4 00:34:40.738 Could not set queue depth (nvme0n1) 00:34:40.738 Could not set queue depth (nvme0n2) 00:34:40.738 Could not set queue depth (nvme0n3) 00:34:40.738 Could not set queue depth (nvme0n4) 00:34:40.998 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:40.998 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:40.998 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:40.998 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:40.998 fio-3.35 00:34:40.998 Starting 4 threads 00:34:42.385 00:34:42.385 job0: (groupid=0, jobs=1): err= 0: pid=3932382: Tue Nov 26 20:12:42 2024 00:34:42.385 read: IOPS=19, BW=79.4KiB/s (81.3kB/s)(80.0KiB/1007msec) 00:34:42.385 slat (nsec): min=3622, max=27660, avg=25948.40, stdev=5260.00 00:34:42.385 clat (usec): min=26896, max=41119, avg=40269.29, stdev=3149.10 00:34:42.385 lat (usec): min=26923, max=41146, avg=40295.24, stdev=3148.79 00:34:42.385 clat percentiles (usec): 00:34:42.385 | 1.00th=[26870], 5.00th=[26870], 10.00th=[40633], 20.00th=[40633], 00:34:42.385 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:42.385 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:42.385 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:42.385 | 99.99th=[41157] 00:34:42.385 write: IOPS=508, BW=2034KiB/s 
(2083kB/s)(2048KiB/1007msec); 0 zone resets 00:34:42.385 slat (nsec): min=3270, max=50237, avg=11723.69, stdev=7596.52 00:34:42.385 clat (usec): min=145, max=1027, avg=371.09, stdev=122.74 00:34:42.385 lat (usec): min=150, max=1061, avg=382.81, stdev=124.83 00:34:42.385 clat percentiles (usec): 00:34:42.385 | 1.00th=[ 176], 5.00th=[ 212], 10.00th=[ 247], 20.00th=[ 277], 00:34:42.385 | 30.00th=[ 302], 40.00th=[ 322], 50.00th=[ 338], 60.00th=[ 367], 00:34:42.385 | 70.00th=[ 412], 80.00th=[ 461], 90.00th=[ 553], 95.00th=[ 611], 00:34:42.385 | 99.00th=[ 742], 99.50th=[ 799], 99.90th=[ 1029], 99.95th=[ 1029], 00:34:42.385 | 99.99th=[ 1029] 00:34:42.385 bw ( KiB/s): min= 4096, max= 4096, per=42.38%, avg=4096.00, stdev= 0.00, samples=1 00:34:42.385 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:42.385 lat (usec) : 250=10.53%, 500=71.43%, 750=13.53%, 1000=0.56% 00:34:42.385 lat (msec) : 2=0.19%, 50=3.76% 00:34:42.385 cpu : usr=0.50%, sys=0.40%, ctx=535, majf=0, minf=1 00:34:42.385 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:42.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.385 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.385 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:42.385 job1: (groupid=0, jobs=1): err= 0: pid=3932383: Tue Nov 26 20:12:42 2024 00:34:42.385 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:42.385 slat (nsec): min=26559, max=60963, avg=27609.34, stdev=3518.46 00:34:42.385 clat (usec): min=817, max=1439, avg=1113.41, stdev=82.68 00:34:42.385 lat (usec): min=845, max=1466, avg=1141.02, stdev=82.69 00:34:42.385 clat percentiles (usec): 00:34:42.385 | 1.00th=[ 898], 5.00th=[ 963], 10.00th=[ 1012], 20.00th=[ 1057], 00:34:42.385 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1139], 00:34:42.385 | 70.00th=[ 1156], 80.00th=[ 1172], 90.00th=[ 1221], 95.00th=[ 1237], 00:34:42.385 | 99.00th=[ 1270], 99.50th=[ 1303], 99.90th=[ 1434], 99.95th=[ 1434], 00:34:42.385 | 99.99th=[ 1434] 00:34:42.385 write: IOPS=638, BW=2553KiB/s (2615kB/s)(2556KiB/1001msec); 0 zone resets 00:34:42.385 slat (nsec): min=9307, max=56147, avg=31669.56, stdev=8537.97 00:34:42.385 clat (usec): min=270, max=945, avg=598.80, stdev=112.52 00:34:42.385 lat (usec): min=281, max=980, avg=630.47, stdev=115.61 00:34:42.385 clat percentiles (usec): 00:34:42.385 | 1.00th=[ 334], 5.00th=[ 383], 10.00th=[ 445], 20.00th=[ 510], 00:34:42.385 | 30.00th=[ 553], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 635], 00:34:42.385 | 70.00th=[ 660], 80.00th=[ 693], 90.00th=[ 734], 95.00th=[ 766], 00:34:42.385 | 99.00th=[ 840], 99.50th=[ 873], 99.90th=[ 947], 99.95th=[ 947], 00:34:42.385 | 99.99th=[ 947] 00:34:42.385 bw ( KiB/s): min= 4096, max= 4096, per=42.38%, avg=4096.00, stdev= 0.00, samples=1 00:34:42.385 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:42.385 lat (usec) : 500=10.34%, 750=41.62%, 1000=7.38% 00:34:42.385 lat (msec) : 2=40.66% 00:34:42.385 cpu : usr=2.00%, sys=5.10%, ctx=1152, majf=0, minf=1 00:34:42.385 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:42.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.385 issued rwts: total=512,639,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.385 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:34:42.385 job2: (groupid=0, jobs=1): err= 0: pid=3932386: Tue Nov 26 20:12:42 2024 00:34:42.385 read: IOPS=16, BW=67.9KiB/s (69.5kB/s)(68.0KiB/1002msec) 00:34:42.385 slat (nsec): min=25261, max=25906, avg=25611.47, stdev=211.82 00:34:42.385 clat (usec): min=23951, max=42156, avg=40756.73, stdev=4341.81 00:34:42.385 lat (usec): min=23976, max=42182, avg=40782.34, stdev=4341.87 00:34:42.385 clat percentiles (usec): 00:34:42.385 | 1.00th=[23987], 5.00th=[23987], 10.00th=[41157], 20.00th=[41157], 00:34:42.385 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:34:42.385 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:42.385 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:42.385 | 99.99th=[42206] 00:34:42.385 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:34:42.385 slat (nsec): min=9241, max=50916, avg=24741.88, stdev=10631.31 00:34:42.385 clat (usec): min=205, max=986, avg=571.94, stdev=141.96 00:34:42.385 lat (usec): min=214, max=1018, avg=596.68, stdev=145.42 00:34:42.385 clat percentiles (usec): 00:34:42.385 | 1.00th=[ 241], 5.00th=[ 326], 10.00th=[ 388], 20.00th=[ 465], 00:34:42.386 | 30.00th=[ 498], 40.00th=[ 537], 50.00th=[ 578], 60.00th=[ 611], 00:34:42.386 | 70.00th=[ 644], 80.00th=[ 685], 90.00th=[ 742], 95.00th=[ 824], 00:34:42.386 | 99.00th=[ 922], 99.50th=[ 963], 99.90th=[ 988], 99.95th=[ 988], 00:34:42.386 | 99.99th=[ 988] 00:34:42.386 bw ( KiB/s): min= 4096, max= 4096, per=42.38%, avg=4096.00, stdev= 0.00, samples=1 00:34:42.386 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:42.386 lat (usec) : 250=1.13%, 500=27.98%, 750=59.17%, 1000=8.51% 00:34:42.386 lat (msec) : 50=3.21% 00:34:42.386 cpu : usr=0.40%, sys=1.50%, ctx=529, majf=0, minf=1 00:34:42.386 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:42.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.386 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.386 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:42.386 job3: (groupid=0, jobs=1): err= 0: pid=3932387: Tue Nov 26 20:12:42 2024 00:34:42.386 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:42.386 slat (nsec): min=7808, max=45973, avg=27037.82, stdev=2145.34 00:34:42.386 clat (usec): min=575, max=3901, avg=985.87, stdev=160.33 00:34:42.386 lat (usec): min=607, max=3928, avg=1012.91, stdev=160.10 00:34:42.386 clat percentiles (usec): 00:34:42.386 | 1.00th=[ 725], 5.00th=[ 816], 10.00th=[ 865], 20.00th=[ 914], 00:34:42.386 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 1004], 00:34:42.386 | 70.00th=[ 1020], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1139], 00:34:42.386 | 99.00th=[ 1205], 99.50th=[ 1237], 99.90th=[ 3916], 99.95th=[ 3916], 00:34:42.386 | 99.99th=[ 3916] 00:34:42.386 write: IOPS=769, BW=3077KiB/s (3151kB/s)(3080KiB/1001msec); 0 zone resets 00:34:42.386 slat (nsec): min=9054, max=52659, avg=31161.94, stdev=7683.86 00:34:42.386 clat (usec): min=254, max=1017, avg=580.93, stdev=126.25 00:34:42.386 lat (usec): min=265, max=1067, avg=612.09, stdev=128.64 00:34:42.386 clat percentiles (usec): 00:34:42.386 | 1.00th=[ 297], 5.00th=[ 371], 10.00th=[ 412], 20.00th=[ 469], 00:34:42.386 | 30.00th=[ 510], 40.00th=[ 545], 50.00th=[ 586], 60.00th=[ 619], 00:34:42.386 | 70.00th=[ 652], 80.00th=[ 
693], 90.00th=[ 734], 95.00th=[ 775], 00:34:42.386 | 99.00th=[ 881], 99.50th=[ 914], 99.90th=[ 1020], 99.95th=[ 1020], 00:34:42.386 | 99.99th=[ 1020] 00:34:42.386 bw ( KiB/s): min= 4096, max= 4096, per=42.38%, avg=4096.00, stdev= 0.00, samples=1 00:34:42.386 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:42.386 lat (usec) : 500=16.07%, 750=40.02%, 1000=27.22% 00:34:42.386 lat (msec) : 2=16.61%, 4=0.08% 00:34:42.386 cpu : usr=3.20%, sys=4.60%, ctx=1282, majf=0, minf=2 00:34:42.386 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:42.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.386 issued rwts: total=512,770,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.386 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:42.386 00:34:42.386 Run status group 0 (all jobs): 00:34:42.386 READ: bw=4214KiB/s (4316kB/s), 67.9KiB/s-2046KiB/s (69.5kB/s-2095kB/s), io=4244KiB (4346kB), run=1001-1007msec 00:34:42.386 WRITE: bw=9664KiB/s (9896kB/s), 2034KiB/s-3077KiB/s (2083kB/s-3151kB/s), io=9732KiB (9966kB), run=1001-1007msec 00:34:42.386 00:34:42.386 Disk stats (read/write): 00:34:42.386 nvme0n1: ios=67/512, merge=0/0, ticks=858/184, in_queue=1042, util=96.29% 00:34:42.386 nvme0n2: ios=464/512, merge=0/0, ticks=1409/236, in_queue=1645, util=96.53% 00:34:42.386 nvme0n3: ios=12/512, merge=0/0, ticks=501/279, in_queue=780, util=88.41% 00:34:42.386 nvme0n4: ios=511/512, merge=0/0, ticks=470/221, in_queue=691, util=89.54% 00:34:42.386 20:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:34:42.386 [global] 00:34:42.386 thread=1 00:34:42.386 invalidate=1 00:34:42.386 rw=write 00:34:42.386 time_based=1 00:34:42.386 runtime=1 00:34:42.386 ioengine=libaio 00:34:42.386 direct=1 00:34:42.386 bs=4096 00:34:42.386 iodepth=128 00:34:42.386 norandommap=0 00:34:42.386 numjobs=1 00:34:42.386 00:34:42.386 verify_dump=1 00:34:42.386 verify_backlog=512 00:34:42.386 verify_state_save=0 00:34:42.386 do_verify=1 00:34:42.386 verify=crc32c-intel 00:34:42.386 [job0] 00:34:42.386 filename=/dev/nvme0n1 00:34:42.386 [job1] 00:34:42.386 filename=/dev/nvme0n2 00:34:42.386 [job2] 00:34:42.386 filename=/dev/nvme0n3 00:34:42.386 [job3] 00:34:42.386 filename=/dev/nvme0n4 00:34:42.386 Could not set queue depth (nvme0n1) 00:34:42.386 Could not set queue depth (nvme0n2) 00:34:42.386 Could not set queue depth (nvme0n3) 00:34:42.386 Could not set queue depth (nvme0n4) 00:34:42.647 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:42.647 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:42.647 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:42.647 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:42.647 fio-3.35 00:34:42.647 Starting 4 threads 00:34:44.061 00:34:44.061 job0: (groupid=0, jobs=1): err= 0: pid=3932831: Tue Nov 26 20:12:44 2024 00:34:44.061 read: IOPS=6403, BW=25.0MiB/s (26.2MB/s)(25.1MiB/1004msec) 00:34:44.061 slat (nsec): min=951, max=6444.5k, avg=50517.92, stdev=385261.41 00:34:44.061 clat (usec): min=1864, max=27175, avg=7483.95, stdev=4018.06 00:34:44.061 
lat (usec): min=2079, max=27182, avg=7534.47, stdev=4023.90 00:34:44.061 clat percentiles (usec): 00:34:44.061 | 1.00th=[ 3490], 5.00th=[ 4555], 10.00th=[ 4752], 20.00th=[ 5145], 00:34:44.061 | 30.00th=[ 5604], 40.00th=[ 6194], 50.00th=[ 6587], 60.00th=[ 7046], 00:34:44.061 | 70.00th=[ 7767], 80.00th=[ 8586], 90.00th=[ 9634], 95.00th=[12125], 00:34:44.061 | 99.00th=[26870], 99.50th=[27132], 99.90th=[27132], 99.95th=[27132], 00:34:44.061 | 99.99th=[27132] 00:34:44.061 write: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec); 0 zone resets 00:34:44.061 slat (nsec): min=1651, max=59850k, avg=97111.80, stdev=1620876.62 00:34:44.061 clat (usec): min=1323, max=277957, avg=8363.43, stdev=16044.51 00:34:44.061 lat (usec): min=1331, max=277967, avg=8460.55, stdev=16391.93 00:34:44.061 clat percentiles (msec): 00:34:44.061 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 4], 20.00th=[ 5], 00:34:44.061 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:34:44.061 | 70.00th=[ 7], 80.00th=[ 8], 90.00th=[ 8], 95.00th=[ 10], 00:34:44.061 | 99.00th=[ 56], 99.50th=[ 95], 99.90th=[ 247], 99.95th=[ 247], 00:34:44.061 | 99.99th=[ 279] 00:34:44.061 bw ( KiB/s): min=12288, max=40960, per=29.81%, avg=26624.00, stdev=20274.17, samples=2 00:34:44.061 iops : min= 3072, max=10240, avg=6656.00, stdev=5068.54, samples=2 00:34:44.061 lat (msec) : 2=0.44%, 4=6.63%, 10=86.56%, 20=2.49%, 50=2.43% 00:34:44.061 lat (msec) : 100=1.22%, 250=0.22%, 500=0.02% 00:34:44.061 cpu : usr=4.79%, sys=4.89%, ctx=461, majf=0, minf=2 00:34:44.061 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:44.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:44.061 issued rwts: total=6429,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:44.061 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:44.061 job1: (groupid=0, jobs=1): err= 0: pid=3932845: Tue Nov 26 20:12:44 2024 00:34:44.061 read: IOPS=5816, BW=22.7MiB/s (23.8MB/s)(22.9MiB/1007msec) 00:34:44.061 slat (nsec): min=907, max=13896k, avg=70550.94, stdev=608561.84 00:34:44.061 clat (usec): min=1503, max=30580, avg=10832.36, stdev=4611.72 00:34:44.061 lat (usec): min=1512, max=30656, avg=10902.91, stdev=4642.89 00:34:44.061 clat percentiles (usec): 00:34:44.061 | 1.00th=[ 3163], 5.00th=[ 4817], 10.00th=[ 6063], 20.00th=[ 6652], 00:34:44.061 | 30.00th=[ 7635], 40.00th=[ 8717], 50.00th=[ 9765], 60.00th=[11600], 00:34:44.061 | 70.00th=[12911], 80.00th=[15008], 90.00th=[16909], 95.00th=[20317], 00:34:44.061 | 99.00th=[22676], 99.50th=[23462], 99.90th=[27132], 99.95th=[30540], 00:34:44.061 | 99.99th=[30540] 00:34:44.061 write: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec); 0 zone resets 00:34:44.061 slat (nsec): min=1571, max=12860k, avg=77622.02, stdev=596317.64 00:34:44.061 clat (usec): min=308, max=56356, avg=10478.27, stdev=8611.53 00:34:44.061 lat (usec): min=321, max=56371, avg=10555.90, stdev=8666.18 00:34:44.061 clat percentiles (usec): 00:34:44.061 | 1.00th=[ 955], 5.00th=[ 2008], 10.00th=[ 3589], 20.00th=[ 5145], 00:34:44.061 | 30.00th=[ 5866], 40.00th=[ 6259], 50.00th=[ 7832], 60.00th=[10028], 00:34:44.061 | 70.00th=[11994], 80.00th=[14353], 90.00th=[19006], 95.00th=[23462], 00:34:44.061 | 99.00th=[49021], 99.50th=[51643], 99.90th=[55837], 99.95th=[56361], 00:34:44.061 | 99.99th=[56361] 00:34:44.061 bw ( KiB/s): min=20368, max=28784, per=27.52%, avg=24576.00, stdev=5951.01, samples=2 00:34:44.061 iops : min= 5092, max= 7196, 
avg=6144.00, stdev=1487.75, samples=2 00:34:44.061 lat (usec) : 500=0.03%, 750=0.07%, 1000=0.46% 00:34:44.061 lat (msec) : 2=2.19%, 4=4.41%, 10=49.29%, 20=36.30%, 50=6.83% 00:34:44.061 lat (msec) : 100=0.42% 00:34:44.061 cpu : usr=4.97%, sys=6.06%, ctx=463, majf=0, minf=2 00:34:44.061 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:44.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:44.061 issued rwts: total=5857,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:44.061 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:44.061 job2: (groupid=0, jobs=1): err= 0: pid=3932867: Tue Nov 26 20:12:44 2024 00:34:44.061 read: IOPS=5205, BW=20.3MiB/s (21.3MB/s)(20.5MiB/1008msec) 00:34:44.061 slat (usec): min=2, max=14595, avg=79.95, stdev=729.47 00:34:44.061 clat (usec): min=1454, max=34015, avg=11785.78, stdev=4250.05 00:34:44.061 lat (usec): min=2342, max=34019, avg=11865.73, stdev=4300.52 00:34:44.061 clat percentiles (usec): 00:34:44.061 | 1.00th=[ 4228], 5.00th=[ 6980], 10.00th=[ 7504], 20.00th=[ 8455], 00:34:44.061 | 30.00th=[ 8717], 40.00th=[ 9765], 50.00th=[10945], 60.00th=[12256], 00:34:44.061 | 70.00th=[13698], 80.00th=[14877], 90.00th=[16909], 95.00th=[19792], 00:34:44.061 | 99.00th=[25297], 99.50th=[25297], 99.90th=[28443], 99.95th=[28443], 00:34:44.061 | 99.99th=[33817] 00:34:44.061 write: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec); 0 zone resets 00:34:44.061 slat (nsec): min=1780, max=10905k, avg=81215.16, stdev=642964.03 00:34:44.061 clat (usec): min=498, max=57737, avg=11725.24, stdev=9424.95 00:34:44.061 lat (usec): min=531, max=57746, avg=11806.45, stdev=9486.33 00:34:44.061 clat percentiles (usec): 00:34:44.061 | 1.00th=[ 1745], 5.00th=[ 3261], 10.00th=[ 4752], 20.00th=[ 6259], 00:34:44.061 | 30.00th=[ 6849], 40.00th=[ 7767], 50.00th=[ 8717], 60.00th=[10421], 00:34:44.061 | 70.00th=[12125], 80.00th=[13566], 90.00th=[21890], 95.00th=[35390], 00:34:44.061 | 99.00th=[50594], 99.50th=[52167], 99.90th=[54789], 99.95th=[57934], 00:34:44.061 | 99.99th=[57934] 00:34:44.062 bw ( KiB/s): min=21360, max=23688, per=25.22%, avg=22524.00, stdev=1646.14, samples=2 00:34:44.062 iops : min= 5340, max= 5922, avg=5631.00, stdev=411.54, samples=2 00:34:44.062 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.01% 00:34:44.062 lat (msec) : 2=0.72%, 4=3.93%, 10=43.77%, 20=42.93%, 50=8.06% 00:34:44.062 lat (msec) : 100=0.54% 00:34:44.062 cpu : usr=4.97%, sys=5.76%, ctx=291, majf=0, minf=1 00:34:44.062 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:34:44.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:44.062 issued rwts: total=5247,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:44.062 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:44.062 job3: (groupid=0, jobs=1): err= 0: pid=3932877: Tue Nov 26 20:12:44 2024 00:34:44.062 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:34:44.062 slat (nsec): min=924, max=24302k, avg=145870.62, stdev=1185529.37 00:34:44.062 clat (usec): min=3467, max=66935, avg=18055.49, stdev=13239.68 00:34:44.062 lat (usec): min=3469, max=66960, avg=18201.36, stdev=13356.80 00:34:44.062 clat percentiles (usec): 00:34:44.062 | 1.00th=[ 4359], 5.00th=[ 5669], 10.00th=[ 6390], 20.00th=[ 7242], 00:34:44.062 | 30.00th=[ 7767], 40.00th=[ 8225], 
50.00th=[12518], 60.00th=[18482], 00:34:44.062 | 70.00th=[23725], 80.00th=[30278], 90.00th=[38536], 95.00th=[44303], 00:34:44.062 | 99.00th=[58983], 99.50th=[61080], 99.90th=[61080], 99.95th=[63701], 00:34:44.062 | 99.99th=[66847] 00:34:44.062 write: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(15.9MiB/1003msec); 0 zone resets 00:34:44.062 slat (nsec): min=1615, max=25517k, avg=112350.65, stdev=918289.32 00:34:44.062 clat (usec): min=962, max=58925, avg=15359.02, stdev=10484.65 00:34:44.062 lat (usec): min=971, max=61978, avg=15471.37, stdev=10570.54 00:34:44.062 clat percentiles (usec): 00:34:44.062 | 1.00th=[ 2737], 5.00th=[ 5473], 10.00th=[ 6587], 20.00th=[ 6783], 00:34:44.062 | 30.00th=[ 7111], 40.00th=[ 8356], 50.00th=[11600], 60.00th=[15664], 00:34:44.062 | 70.00th=[19268], 80.00th=[23725], 90.00th=[29492], 95.00th=[39060], 00:34:44.062 | 99.00th=[47973], 99.50th=[49021], 99.90th=[51643], 99.95th=[57934], 00:34:44.062 | 99.99th=[58983] 00:34:44.062 bw ( KiB/s): min=11080, max=20480, per=17.67%, avg=15780.00, stdev=6646.80, samples=2 00:34:44.062 iops : min= 2770, max= 5120, avg=3945.00, stdev=1661.70, samples=2 00:34:44.062 lat (usec) : 1000=0.04% 00:34:44.062 lat (msec) : 2=0.21%, 4=1.44%, 10=43.21%, 20=22.86%, 50=30.92% 00:34:44.062 lat (msec) : 100=1.33% 00:34:44.062 cpu : usr=2.89%, sys=3.49%, ctx=276, majf=0, minf=1 00:34:44.062 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:34:44.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:44.062 issued rwts: total=3584,4072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:44.062 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:44.062 00:34:44.062 Run status group 0 (all jobs): 00:34:44.062 READ: bw=81.8MiB/s (85.8MB/s), 14.0MiB/s-25.0MiB/s (14.6MB/s-26.2MB/s), io=82.5MiB (86.5MB), run=1003-1008msec 00:34:44.062 WRITE: bw=87.2MiB/s (91.4MB/s), 15.9MiB/s-25.9MiB/s (16.6MB/s-27.2MB/s), io=87.9MiB (92.2MB), run=1003-1008msec 00:34:44.062 00:34:44.062 Disk stats (read/write): 00:34:44.062 nvme0n1: ios=4632/5120, merge=0/0, ticks=30138/27329, in_queue=57467, util=98.40% 00:34:44.062 nvme0n2: ios=5151/5622, merge=0/0, ticks=45471/45920, in_queue=91391, util=91.03% 00:34:44.062 nvme0n3: ios=4225/4608, merge=0/0, ticks=48736/53263, in_queue=101999, util=88.30% 00:34:44.062 nvme0n4: ios=3113/3347, merge=0/0, ticks=25552/21407, in_queue=46959, util=95.73% 00:34:44.062 20:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:34:44.062 [global] 00:34:44.062 thread=1 00:34:44.062 invalidate=1 00:34:44.062 rw=randwrite 00:34:44.062 time_based=1 00:34:44.062 runtime=1 00:34:44.062 ioengine=libaio 00:34:44.062 direct=1 00:34:44.062 bs=4096 00:34:44.062 iodepth=128 00:34:44.062 norandommap=0 00:34:44.062 numjobs=1 00:34:44.062 00:34:44.062 verify_dump=1 00:34:44.062 verify_backlog=512 00:34:44.062 verify_state_save=0 00:34:44.062 do_verify=1 00:34:44.062 verify=crc32c-intel 00:34:44.062 [job0] 00:34:44.062 filename=/dev/nvme0n1 00:34:44.062 [job1] 00:34:44.062 filename=/dev/nvme0n2 00:34:44.062 [job2] 00:34:44.062 filename=/dev/nvme0n3 00:34:44.062 [job3] 00:34:44.062 filename=/dev/nvme0n4 00:34:44.062 Could not set queue depth (nvme0n1) 00:34:44.062 Could not set queue depth (nvme0n2) 00:34:44.062 Could not set queue depth (nvme0n3) 00:34:44.062 Could not set 
queue depth (nvme0n4) 00:34:44.322 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:44.322 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:44.322 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:44.322 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:44.322 fio-3.35 00:34:44.322 Starting 4 threads 00:34:45.706 00:34:45.706 job0: (groupid=0, jobs=1): err= 0: pid=3933288: Tue Nov 26 20:12:46 2024 00:34:45.706 read: IOPS=7153, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1002msec) 00:34:45.706 slat (nsec): min=905, max=9347.0k, avg=69928.96, stdev=495980.09 00:34:45.706 clat (usec): min=2709, max=21766, avg=9469.49, stdev=3161.20 00:34:45.706 lat (usec): min=2713, max=21771, avg=9539.41, stdev=3192.70 00:34:45.706 clat percentiles (usec): 00:34:45.706 | 1.00th=[ 3490], 5.00th=[ 5276], 10.00th=[ 5800], 20.00th=[ 6849], 00:34:45.706 | 30.00th=[ 7635], 40.00th=[ 8094], 50.00th=[ 8717], 60.00th=[ 9372], 00:34:45.706 | 70.00th=[11207], 80.00th=[12387], 90.00th=[14222], 95.00th=[15270], 00:34:45.706 | 99.00th=[17171], 99.50th=[17171], 99.90th=[19530], 99.95th=[19530], 00:34:45.706 | 99.99th=[21890] 00:34:45.706 write: IOPS=7610, BW=29.7MiB/s (31.2MB/s)(29.8MiB/1002msec); 0 zone resets 00:34:45.706 slat (nsec): min=1514, max=6557.6k, avg=52202.38, stdev=360843.66 00:34:45.706 clat (usec): min=686, max=20877, avg=7768.56, stdev=3498.61 00:34:45.706 lat (usec): min=851, max=20886, avg=7820.77, stdev=3523.73 00:34:45.706 clat percentiles (usec): 00:34:45.706 | 1.00th=[ 2114], 5.00th=[ 3818], 10.00th=[ 4621], 20.00th=[ 5211], 00:34:45.706 | 30.00th=[ 5604], 40.00th=[ 6194], 50.00th=[ 6849], 60.00th=[ 7504], 00:34:45.706 | 70.00th=[ 8291], 80.00th=[ 9896], 90.00th=[13698], 95.00th=[16057], 00:34:45.706 | 99.00th=[18220], 99.50th=[19006], 99.90th=[19268], 99.95th=[19792], 00:34:45.706 | 99.99th=[20841] 00:34:45.706 bw ( KiB/s): min=29816, max=29816, per=36.64%, avg=29816.00, stdev= 0.00, samples=1 00:34:45.706 iops : min= 7454, max= 7454, avg=7454.00, stdev= 0.00, samples=1 00:34:45.706 lat (usec) : 750=0.01%, 1000=0.02% 00:34:45.706 lat (msec) : 2=0.47%, 4=3.41%, 10=67.61%, 20=28.44%, 50=0.03% 00:34:45.706 cpu : usr=4.60%, sys=8.89%, ctx=428, majf=0, minf=1 00:34:45.706 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:34:45.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:45.706 issued rwts: total=7168,7626,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:45.706 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:45.706 job1: (groupid=0, jobs=1): err= 0: pid=3933300: Tue Nov 26 20:12:46 2024 00:34:45.706 read: IOPS=2319, BW=9276KiB/s (9499kB/s)(9304KiB/1003msec) 00:34:45.706 slat (nsec): min=987, max=53080k, avg=310987.07, stdev=3052426.66 00:34:45.706 clat (usec): min=542, max=196221, avg=34112.66, stdev=45541.66 00:34:45.706 lat (msec): min=2, max=196, avg=34.42, stdev=45.83 00:34:45.706 clat percentiles (msec): 00:34:45.706 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 11], 00:34:45.706 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 22], 00:34:45.706 | 70.00th=[ 31], 80.00th=[ 35], 90.00th=[ 93], 95.00th=[ 192], 00:34:45.706 | 99.00th=[ 197], 99.50th=[ 197], 99.90th=[ 197], 99.95th=[ 
197], 00:34:45.706 | 99.99th=[ 197] 00:34:45.706 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:34:45.706 slat (nsec): min=1756, max=11630k, avg=104040.30, stdev=633202.86 00:34:45.706 clat (msec): min=5, max=196, avg=18.61, stdev=21.92 00:34:45.706 lat (msec): min=5, max=196, avg=18.72, stdev=21.94 00:34:45.706 clat percentiles (msec): 00:34:45.706 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:34:45.706 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 14], 00:34:45.706 | 70.00th=[ 18], 80.00th=[ 27], 90.00th=[ 31], 95.00th=[ 34], 00:34:45.706 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:34:45.706 | 99.99th=[ 197] 00:34:45.706 bw ( KiB/s): min= 6712, max=13768, per=12.59%, avg=10240.00, stdev=4989.35, samples=2 00:34:45.706 iops : min= 1678, max= 3442, avg=2560.00, stdev=1247.34, samples=2 00:34:45.706 lat (usec) : 750=0.02% 00:34:45.706 lat (msec) : 4=0.72%, 10=28.61%, 20=38.11%, 50=22.78%, 100=4.54% 00:34:45.706 lat (msec) : 250=5.22% 00:34:45.706 cpu : usr=1.50%, sys=2.20%, ctx=223, majf=0, minf=1 00:34:45.706 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:34:45.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:45.706 issued rwts: total=2326,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:45.706 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:45.706 job2: (groupid=0, jobs=1): err= 0: pid=3933318: Tue Nov 26 20:12:46 2024 00:34:45.706 read: IOPS=4739, BW=18.5MiB/s (19.4MB/s)(19.3MiB/1045msec) 00:34:45.706 slat (nsec): min=990, max=13234k, avg=97032.68, stdev=634704.12 00:34:45.706 clat (usec): min=5364, max=64458, avg=13678.41, stdev=10433.14 00:34:45.706 lat (usec): min=5373, max=64466, avg=13775.44, stdev=10482.24 00:34:45.706 clat percentiles (usec): 00:34:45.706 | 1.00th=[ 5866], 5.00th=[ 6652], 10.00th=[ 7046], 20.00th=[ 7504], 00:34:45.706 | 30.00th=[ 7832], 40.00th=[ 8225], 50.00th=[ 9110], 60.00th=[11338], 00:34:45.706 | 70.00th=[11731], 80.00th=[18220], 90.00th=[28443], 95.00th=[33424], 00:34:45.706 | 99.00th=[58983], 99.50th=[64226], 99.90th=[64226], 99.95th=[64226], 00:34:45.706 | 99.99th=[64226] 00:34:45.706 write: IOPS=4899, BW=19.1MiB/s (20.1MB/s)(20.0MiB/1045msec); 0 zone resets 00:34:45.706 slat (nsec): min=1668, max=14458k, avg=96302.66, stdev=672519.63 00:34:45.706 clat (usec): min=4340, max=45830, avg=12348.11, stdev=7750.76 00:34:45.706 lat (usec): min=4470, max=45864, avg=12444.41, stdev=7818.04 00:34:45.706 clat percentiles (usec): 00:34:45.706 | 1.00th=[ 5473], 5.00th=[ 5997], 10.00th=[ 7046], 20.00th=[ 7504], 00:34:45.706 | 30.00th=[ 7701], 40.00th=[ 7963], 50.00th=[ 8455], 60.00th=[ 9765], 00:34:45.706 | 70.00th=[12256], 80.00th=[17171], 90.00th=[26346], 95.00th=[30016], 00:34:45.706 | 99.00th=[38011], 99.50th=[38536], 99.90th=[38536], 99.95th=[40109], 00:34:45.706 | 99.99th=[45876] 00:34:45.706 bw ( KiB/s): min=15024, max=25936, per=25.17%, avg=20480.00, stdev=7715.95, samples=2 00:34:45.706 iops : min= 3756, max= 6484, avg=5120.00, stdev=1928.99, samples=2 00:34:45.706 lat (msec) : 10=58.81%, 20=23.63%, 50=16.32%, 100=1.24% 00:34:45.706 cpu : usr=3.35%, sys=5.75%, ctx=390, majf=0, minf=1 00:34:45.706 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:34:45.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:34:45.706 issued rwts: total=4953,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:45.706 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:45.706 job3: (groupid=0, jobs=1): err= 0: pid=3933324: Tue Nov 26 20:12:46 2024 00:34:45.706 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:34:45.706 slat (nsec): min=945, max=11411k, avg=86716.56, stdev=670741.49 00:34:45.706 clat (usec): min=3605, max=40050, avg=11831.48, stdev=5928.70 00:34:45.706 lat (usec): min=3614, max=40074, avg=11918.19, stdev=5981.54 00:34:45.706 clat percentiles (usec): 00:34:45.706 | 1.00th=[ 4817], 5.00th=[ 6521], 10.00th=[ 7242], 20.00th=[ 7767], 00:34:45.706 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9896], 60.00th=[10814], 00:34:45.706 | 70.00th=[12125], 80.00th=[13566], 90.00th=[20579], 95.00th=[26608], 00:34:45.706 | 99.00th=[32900], 99.50th=[34866], 99.90th=[34866], 99.95th=[36963], 00:34:45.706 | 99.99th=[40109] 00:34:45.706 write: IOPS=5927, BW=23.2MiB/s (24.3MB/s)(23.2MiB/1004msec); 0 zone resets 00:34:45.706 slat (nsec): min=1546, max=10307k, avg=77341.62, stdev=474066.99 00:34:45.706 clat (usec): min=914, max=32799, avg=10214.08, stdev=3971.22 00:34:45.706 lat (usec): min=1202, max=32807, avg=10291.42, stdev=3996.73 00:34:45.706 clat percentiles (usec): 00:34:45.706 | 1.00th=[ 3752], 5.00th=[ 4752], 10.00th=[ 5800], 20.00th=[ 6521], 00:34:45.706 | 30.00th=[ 8094], 40.00th=[ 9110], 50.00th=[10028], 60.00th=[10552], 00:34:45.706 | 70.00th=[11076], 80.00th=[13042], 90.00th=[15270], 95.00th=[16581], 00:34:45.706 | 99.00th=[21365], 99.50th=[29754], 99.90th=[31851], 99.95th=[32900], 00:34:45.706 | 99.99th=[32900] 00:34:45.706 bw ( KiB/s): min=20480, max=26104, per=28.63%, avg=23292.00, stdev=3976.77, samples=2 00:34:45.706 iops : min= 5120, max= 6526, avg=5823.00, stdev=994.19, samples=2 00:34:45.706 lat (usec) : 1000=0.01% 00:34:45.706 lat (msec) : 2=0.02%, 4=0.87%, 10=49.86%, 20=43.05%, 50=6.19% 00:34:45.707 cpu : usr=3.99%, sys=5.98%, ctx=426, majf=0, minf=1 00:34:45.707 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:45.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:45.707 issued rwts: total=5632,5951,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:45.707 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:45.707 00:34:45.707 Run status group 0 (all jobs): 00:34:45.707 READ: bw=75.1MiB/s (78.7MB/s), 9276KiB/s-27.9MiB/s (9499kB/s-29.3MB/s), io=78.4MiB (82.2MB), run=1002-1045msec 00:34:45.707 WRITE: bw=79.5MiB/s (83.3MB/s), 9.97MiB/s-29.7MiB/s (10.5MB/s-31.2MB/s), io=83.0MiB (87.1MB), run=1002-1045msec 00:34:45.707 00:34:45.707 Disk stats (read/write): 00:34:45.707 nvme0n1: ios=6053/6144, merge=0/0, ticks=40327/42150, in_queue=82477, util=92.08% 00:34:45.707 nvme0n2: ios=1876/2048, merge=0/0, ticks=19683/8001, in_queue=27684, util=96.74% 00:34:45.707 nvme0n3: ios=4200/4608, merge=0/0, ticks=16218/18399, in_queue=34617, util=100.00% 00:34:45.707 nvme0n4: ios=4626/4647, merge=0/0, ticks=38249/39228, in_queue=77477, util=89.97% 00:34:45.707 20:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:45.707 20:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3933450 00:34:45.707 20:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:34:45.707 20:12:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:34:45.707 [global]
00:34:45.707 thread=1
00:34:45.707 invalidate=1
00:34:45.707 rw=read
00:34:45.707 time_based=1
00:34:45.707 runtime=10
00:34:45.707 ioengine=libaio
00:34:45.707 direct=1
00:34:45.707 bs=4096
00:34:45.707 iodepth=1
00:34:45.707 norandommap=1
00:34:45.707 numjobs=1
00:34:45.707
00:34:45.707 [job0]
00:34:45.707 filename=/dev/nvme0n1
00:34:45.707 [job1]
00:34:45.707 filename=/dev/nvme0n2
00:34:45.707 [job2]
00:34:45.707 filename=/dev/nvme0n3
00:34:45.707 [job3]
00:34:45.707 filename=/dev/nvme0n4
00:34:45.707 Could not set queue depth (nvme0n1)
00:34:45.707 Could not set queue depth (nvme0n2)
00:34:45.707 Could not set queue depth (nvme0n3)
00:34:45.707 Could not set queue depth (nvme0n4)
00:34:45.967 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:45.967 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:45.967 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:45.967 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:45.967 fio-3.35
00:34:45.967 Starting 4 threads
00:34:48.508 20:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0
00:34:48.767 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=303104, buflen=4096
00:34:48.767 fio: pid=3933784, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:34:48.768 20:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0
00:34:49.027 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=10526720, buflen=4096
00:34:49.027 fio: pid=3933778, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:34:49.028 20:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:49.028 20:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:34:49.288 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11300864, buflen=4096
00:34:49.289 fio: pid=3933743, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:34:49.289 20:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:49.289 20:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:34:49.289 20:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:49.289 20:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:34:49.289 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=307200, buflen=4096
00:34:49.289 fio: pid=3933760, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:34:49.289
00:34:49.289 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3933743: Tue Nov 26 20:12:50 2024
00:34:49.289 read: IOPS=933, BW=3733KiB/s (3823kB/s)(10.8MiB/2956msec)
00:34:49.289 slat (usec): min=6, max=36353, avg=62.54, stdev=1021.27
00:34:49.289 clat (usec): min=473, max=2007, avg=992.90, stdev=86.46
00:34:49.289 lat (usec): min=499, max=37468, avg=1055.46, stdev=1028.70
00:34:49.289 clat percentiles (usec):
00:34:49.289 | 1.00th=[ 750], 5.00th=[ 848], 10.00th=[ 898], 20.00th=[ 947],
00:34:49.289 | 30.00th=[ 963], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1012],
00:34:49.289 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1123],
00:34:49.289 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[ 1598], 99.95th=[ 1926],
00:34:49.289 | 99.99th=[ 2008]
00:34:49.289 bw ( KiB/s): min= 3864, max= 3952, per=55.76%, avg=3886.40, stdev=37.27, samples=5
00:34:49.289 iops : min= 966, max= 988, avg=971.60, stdev= 9.32, samples=5
00:34:49.289 lat (usec) : 500=0.04%, 750=0.91%, 1000=53.30%
00:34:49.289 lat (msec) : 2=45.69%, 4=0.04%
00:34:49.289 cpu : usr=1.76%, sys=3.62%, ctx=2764, majf=0, minf=1
00:34:49.289 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:49.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:49.289 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:49.289 issued rwts: total=2760,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:49.289 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:49.289 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3933760: Tue Nov 26 20:12:50 2024
00:34:49.289 read: IOPS=24, BW=95.4KiB/s (97.7kB/s)(300KiB/3144msec)
00:34:49.289 slat (usec): min=24, max=13612, avg=284.53, stdev=1675.66
00:34:49.289 clat (usec): min=806, max=42103, avg=41331.17, stdev=4749.96
00:34:49.289 lat (usec): min=841, max=54993, avg=41611.80, stdev=5042.24
00:34:49.289 clat percentiles (usec):
00:34:49.289 | 1.00th=[ 807], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681],
00:34:49.289 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206],
00:34:49.289 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:34:49.289 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:34:49.289 | 99.99th=[42206]
00:34:49.289 bw ( KiB/s): min= 95, max= 96, per=1.36%, avg=95.83, stdev= 0.41, samples=6
00:34:49.289 iops : min= 23, max= 24, avg=23.83, stdev= 0.41, samples=6
00:34:49.289 lat (usec) : 1000=1.32%
00:34:49.289 lat (msec) : 50=97.37%
00:34:49.289 cpu : usr=0.00%, sys=0.10%, ctx=80, majf=0, minf=2
00:34:49.289 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:49.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:49.289 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:49.289 issued rwts: total=76,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:49.289 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:49.289 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3933778: Tue Nov 26 20:12:50 2024
00:34:49.289 read: IOPS=924, BW=3697KiB/s (3785kB/s)(10.0MiB/2781msec)
00:34:49.289 slat (usec): min=24, max=14690, avg=35.27, stdev=332.02
00:34:49.289 clat (usec): min=634, max=5138, avg=1029.74, stdev=125.73
00:34:49.289 lat (usec): min=671, max=15776, avg=1065.01, stdev=357.44
00:34:49.289 clat percentiles (usec):
00:34:49.289 | 1.00th=[ 791], 5.00th=[ 865], 10.00th=[ 898], 20.00th=[ 955],
00:34:49.289 | 30.00th=[ 988], 40.00th=[ 1004], 50.00th=[ 1029], 60.00th=[ 1057],
00:34:49.289 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1188],
00:34:49.289 | 99.00th=[ 1237], 99.50th=[ 1270], 99.90th=[ 1336], 99.95th=[ 1352],
00:34:49.289 | 99.99th=[ 5145]
00:34:49.289 bw ( KiB/s): min= 3736, max= 3848, per=54.08%, avg=3769.60, stdev=46.78, samples=5
00:34:49.289 iops : min= 934, max= 962, avg=942.40, stdev=11.70, samples=5
00:34:49.289 lat (usec) : 750=0.39%, 1000=36.06%
00:34:49.289 lat (msec) : 2=63.48%, 10=0.04%
00:34:49.289 cpu : usr=1.01%, sys=2.84%, ctx=2574, majf=0, minf=2
00:34:49.289 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:49.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:49.289 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:49.289 issued rwts: total=2571,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:49.289 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:49.289 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3933784: Tue Nov 26 20:12:50 2024
00:34:49.289 read: IOPS=28, BW=114KiB/s (117kB/s)(296KiB/2600msec)
00:34:49.289 slat (nsec): min=7264, max=41275, avg=27460.48, stdev=5162.96
00:34:49.289 clat (usec): min=572, max=42103, avg=34737.70, stdev=15752.43
00:34:49.289 lat (usec): min=598, max=42132, avg=34765.15, stdev=15754.11
00:34:49.289 clat percentiles (usec):
00:34:49.289 | 1.00th=[ 570], 5.00th=[ 766], 10.00th=[ 930], 20.00th=[41681],
00:34:49.289 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206],
00:34:49.289 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:34:49.289 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:34:49.289 | 99.99th=[42206]
00:34:49.289 bw ( KiB/s): min= 96, max= 191, per=1.65%, avg=115.00, stdev=42.49, samples=5
00:34:49.289 iops : min= 24, max= 47, avg=28.60, stdev=10.29, samples=5
00:34:49.289 lat (usec) : 750=4.00%, 1000=12.00%
00:34:49.289 lat (msec) : 2=1.33%, 50=81.33%
00:34:49.289 cpu : usr=0.00%, sys=0.15%, ctx=77, majf=0, minf=2
00:34:49.289 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:49.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:49.289 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:49.289 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:49.289 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:49.289
00:34:49.289 Run status group 0 (all jobs):
00:34:49.289 READ: bw=6969KiB/s (7137kB/s), 95.4KiB/s-3733KiB/s (97.7kB/s-3823kB/s), io=21.4MiB (22.4MB), run=2600-3144msec
00:34:49.289
00:34:49.289 Disk stats (read/write):
00:34:49.289 nvme0n1: ios=2668/0, merge=0/0, ticks=2486/0, in_queue=2486, util=91.59%
00:34:49.289 nvme0n2: ios=74/0, merge=0/0, ticks=3058/0, in_queue=3058, util=95.14%
00:34:49.289 nvme0n3: ios=2433/0, merge=0/0, ticks=2431/0, in_queue=2431, util=96.03%
00:34:49.289 nvme0n4: ios=109/0, merge=0/0, ticks=3432/0, in_queue=3432, util=99.78%
00:34:49.550 20:12:50
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:49.550 20:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:49.812 20:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:49.812 20:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:49.812 20:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:49.812 20:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:50.073 20:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:50.073 20:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:50.335 20:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:50.335 20:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3933450 00:34:50.335 20:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:50.335 20:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:50.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:50.335 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:50.335 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:34:50.335 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:50.335 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:50.335 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:50.335 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:50.335 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:34:50.335 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:50.335 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:50.335 nvmf hotplug test: fio failed as expected 00:34:50.335 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:34:50.601 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:50.601 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:50.601 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:50.601 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:50.601 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:50.601 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:50.601 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:50.601 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:50.601 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:50.601 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:50.601 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:50.601 rmmod nvme_tcp 00:34:50.601 rmmod nvme_fabrics 00:34:50.601 rmmod nvme_keyring 00:34:50.601 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:50.601 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:50.601 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:50.601 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3930278 ']' 00:34:50.601 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3930278 00:34:50.601 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3930278 ']' 00:34:50.601 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3930278 00:34:50.601 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:34:50.601 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:50.601 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3930278 00:34:50.601 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:50.601 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:50.601 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3930278' 00:34:50.601 killing process with pid 3930278 00:34:50.601 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3930278 00:34:50.601 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3930278 00:34:50.924 
20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:50.924 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:50.924 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:50.924 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:50.924 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:34:50.924 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:50.924 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:50.924 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:50.924 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:50.924 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:50.924 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:50.924 20:12:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:52.953 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:52.953 00:34:52.953 real 0m28.391s 00:34:52.953 user 2m25.921s 00:34:52.953 sys 0m12.193s 00:34:52.954 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:52.954 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:52.954 ************************************ 00:34:52.954 END TEST nvmf_fio_target 00:34:52.954 ************************************ 00:34:52.954 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:52.954 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:52.954 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:52.954 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:52.954 ************************************ 00:34:52.954 START TEST nvmf_bdevio 00:34:52.954 ************************************ 00:34:52.954 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:52.954 * Looking for test storage... 
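Before the next suite begins, it is worth spelling out what the nvmf_fio_target teardown traced above actually did. The following is a minimal standalone sketch of that sequence, not the harness itself: the PID and interface name are placeholders read off this run's log, and `ip netns delete` stands in for the harness's _remove_spdk_ns helper.

  # Approximate nvmftestfini, as traced above, for an SPDK NVMe-oF TCP test target.
  target_pid=3930278                    # placeholder: this run's $nvmfpid
  initiator_if=cvl_0_1                  # placeholder: initiator-side net device
  kill "$target_pid"
  while kill -0 "$target_pid" 2>/dev/null; do sleep 0.2; done   # wait for nvmf_tgt to exit
  sync
  modprobe -v -r nvme-tcp nvme-fabrics  # the trace shows this also rmmod's nvme_keyring
  iptables-save | grep -v SPDK_NVMF | iptables-restore          # drop the SPDK_NVMF accept rule
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null                   # stand-in for _remove_spdk_ns
  ip -4 addr flush "$initiator_if"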
00:34:52.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:52.954 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:52.954 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:53.215 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:53.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.216 --rc genhtml_branch_coverage=1 00:34:53.216 --rc genhtml_function_coverage=1 00:34:53.216 --rc genhtml_legend=1 00:34:53.216 --rc geninfo_all_blocks=1 00:34:53.216 --rc geninfo_unexecuted_blocks=1 00:34:53.216 00:34:53.216 ' 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:53.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.216 --rc genhtml_branch_coverage=1 00:34:53.216 --rc genhtml_function_coverage=1 00:34:53.216 --rc genhtml_legend=1 00:34:53.216 --rc geninfo_all_blocks=1 00:34:53.216 --rc geninfo_unexecuted_blocks=1 00:34:53.216 00:34:53.216 ' 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:53.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.216 --rc genhtml_branch_coverage=1 00:34:53.216 --rc genhtml_function_coverage=1 00:34:53.216 --rc genhtml_legend=1 00:34:53.216 --rc geninfo_all_blocks=1 00:34:53.216 --rc geninfo_unexecuted_blocks=1 00:34:53.216 00:34:53.216 ' 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:53.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.216 --rc genhtml_branch_coverage=1 00:34:53.216 --rc genhtml_function_coverage=1 00:34:53.216 --rc genhtml_legend=1 00:34:53.216 --rc geninfo_all_blocks=1 00:34:53.216 --rc geninfo_unexecuted_blocks=1 00:34:53.216 00:34:53.216 ' 00:34:53.216 20:12:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:53.216 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:53.216 20:12:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:53.217 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:53.217 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:53.217 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:53.217 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:53.217 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:53.217 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:53.217 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:53.217 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:53.217 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:53.217 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:53.217 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:53.217 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:53.217 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.217 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:53.217 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.217 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:53.217 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:53.217 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:53.217 20:12:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:01.360 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:01.360 20:13:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:01.360 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:01.360 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:01.360 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:01.360 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:35:01.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:35:01.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms
00:35:01.361
00:35:01.361 --- 10.0.0.2 ping statistics ---
00:35:01.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:01.361 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:35:01.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:35:01.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms
00:35:01.361
00:35:01.361 --- 10.0.0.1 ping statistics ---
00:35:01.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:01.361 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3938792
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3938792
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3938792 ']'
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:01.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:01.361 20:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:35:01.361 [2024-11-26 20:13:01.476488] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:35:01.361 [2024-11-26 20:13:01.477622] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization...
00:35:01.361 [2024-11-26 20:13:01.477675] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:01.361 [2024-11-26 20:13:01.577331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:35:01.361 [2024-11-26 20:13:01.629615] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:01.361 [2024-11-26 20:13:01.629667] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:01.361 [2024-11-26 20:13:01.629676] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:01.361 [2024-11-26 20:13:01.629683] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:01.361 [2024-11-26 20:13:01.629689] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:01.361 [2024-11-26 20:13:01.632090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:35:01.361 [2024-11-26 20:13:01.632230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:35:01.361 [2024-11-26 20:13:01.632385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:35:01.361 [2024-11-26 20:13:01.632384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:35:01.361 [2024-11-26 20:13:01.710447] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
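The nvmfappstart/waitforlisten exchange above reduces to starting the target binary inside the test namespace and polling until its RPC socket answers. A minimal sketch under the same paths and flags shown in the trace; the socket test is a simplification of waitforlisten, which actually polls via rpc.py with max_retries=100:

  # Launch nvmf_tgt in interrupt mode inside the netns, then wait for its RPC socket.
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do           # max_retries=100, as in the trace
      [[ -S /var/tmp/spdk.sock ]] && break  # the RPC socket appears once the app is up
      sleep 0.1
  done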
00:35:01.361 [2024-11-26 20:13:01.711606] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:01.361 [2024-11-26 20:13:01.711623] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:01.361 [2024-11-26 20:13:01.712113] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:01.361 [2024-11-26 20:13:01.712117] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:01.622 [2024-11-26 20:13:02.337575] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:01.622 Malloc0 00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.622 20:13:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:35:01.622 [2024-11-26 20:13:02.425729] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62
00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=()
00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config
00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:35:01.622 {
00:35:01.622 "params": {
00:35:01.622 "name": "Nvme$subsystem",
00:35:01.622 "trtype": "$TEST_TRANSPORT",
00:35:01.622 "traddr": "$NVMF_FIRST_TARGET_IP",
00:35:01.622 "adrfam": "ipv4",
00:35:01.622 "trsvcid": "$NVMF_PORT",
00:35:01.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:35:01.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:35:01.622 "hdgst": ${hdgst:-false},
00:35:01.622 "ddgst": ${ddgst:-false}
00:35:01.622 },
00:35:01.622 "method": "bdev_nvme_attach_controller"
00:35:01.622 }
00:35:01.622 EOF
00:35:01.622 )")
00:35:01.622 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat
00:35:01.883 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq .
00:35:01.883 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=,
00:35:01.883 20:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:35:01.883 "params": {
00:35:01.883 "name": "Nvme1",
00:35:01.883 "trtype": "tcp",
00:35:01.883 "traddr": "10.0.0.2",
00:35:01.883 "adrfam": "ipv4",
00:35:01.883 "trsvcid": "4420",
00:35:01.883 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:35:01.883 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:35:01.883 "hdgst": false,
00:35:01.883 "ddgst": false
00:35:01.883 },
00:35:01.883 "method": "bdev_nvme_attach_controller"
00:35:01.883 }'
00:35:01.883 [2024-11-26 20:13:02.485544] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization...
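For reference, the rpc_cmd calls traced above can be driven directly with rpc.py using the exact arguments from this run. A sketch only: it assumes nvmf/common.sh is sourced so that gen_nvmf_target_json (the harness helper whose expansion is printed just above) is available, and rpc.py talks to /var/tmp/spdk.sock by default:

  # Recreate the bdevio target setup shown above, then hand bdevio the generated
  # attach config; the <(...) substitution is what appears as /dev/fd/62 in the trace.
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$spdk/scripts/rpc.py"
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" bdev_malloc_create 64 512 -b Malloc0
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$spdk/test/bdev/bdevio/bdevio" --json <(gen_nvmf_target_json)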
00:35:01.883 [2024-11-26 20:13:02.485616] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3939016 ] 00:35:01.883 [2024-11-26 20:13:02.581870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:01.883 [2024-11-26 20:13:02.637942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:01.883 [2024-11-26 20:13:02.638108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:01.883 [2024-11-26 20:13:02.638108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:02.143 I/O targets: 00:35:02.143 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:35:02.143 00:35:02.143 00:35:02.143 CUnit - A unit testing framework for C - Version 2.1-3 00:35:02.143 http://cunit.sourceforge.net/ 00:35:02.143 00:35:02.143 00:35:02.143 Suite: bdevio tests on: Nvme1n1 00:35:02.405 Test: blockdev write read block ...passed 00:35:02.405 Test: blockdev write zeroes read block ...passed 00:35:02.405 Test: blockdev write zeroes read no split ...passed 00:35:02.405 Test: blockdev write zeroes read split ...passed 00:35:02.405 Test: blockdev write zeroes read split partial ...passed 00:35:02.405 Test: blockdev reset ...[2024-11-26 20:13:03.087027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:35:02.405 [2024-11-26 20:13:03.087136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4b970 (9): Bad file descriptor 00:35:02.405 [2024-11-26 20:13:03.094326] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:35:02.405 passed 00:35:02.405 Test: blockdev write read 8 blocks ...passed 00:35:02.405 Test: blockdev write read size > 128k ...passed 00:35:02.405 Test: blockdev write read invalid size ...passed 00:35:02.405 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:02.405 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:02.405 Test: blockdev write read max offset ...passed 00:35:02.667 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:02.667 Test: blockdev writev readv 8 blocks ...passed 00:35:02.667 Test: blockdev writev readv 30 x 1block ...passed 00:35:02.667 Test: blockdev writev readv block ...passed 00:35:02.667 Test: blockdev writev readv size > 128k ...passed 00:35:02.667 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:02.667 Test: blockdev comparev and writev ...[2024-11-26 20:13:03.320673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:02.667 [2024-11-26 20:13:03.320725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:02.667 [2024-11-26 20:13:03.320743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:02.667 [2024-11-26 20:13:03.320752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:02.667 [2024-11-26 20:13:03.321386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:02.667 [2024-11-26 20:13:03.321407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:02.667 [2024-11-26 20:13:03.321421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:02.667 [2024-11-26 20:13:03.321429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:02.667 [2024-11-26 20:13:03.322049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:02.667 [2024-11-26 20:13:03.322064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:02.667 [2024-11-26 20:13:03.322078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:02.667 [2024-11-26 20:13:03.322086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:02.667 [2024-11-26 20:13:03.322699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:02.667 [2024-11-26 20:13:03.322713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:02.667 [2024-11-26 20:13:03.322727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:02.667 [2024-11-26 20:13:03.322735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:02.667 passed 00:35:02.667 Test: blockdev nvme passthru rw ...passed 00:35:02.668 Test: blockdev nvme passthru vendor specific ...[2024-11-26 20:13:03.407011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:02.668 [2024-11-26 20:13:03.407029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:02.668 [2024-11-26 20:13:03.407409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:02.668 [2024-11-26 20:13:03.407422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:02.668 [2024-11-26 20:13:03.407805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:02.668 [2024-11-26 20:13:03.407818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:02.668 [2024-11-26 20:13:03.408205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:02.668 [2024-11-26 20:13:03.408221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:02.668 passed 00:35:02.668 Test: blockdev nvme admin passthru ...passed 00:35:02.668 Test: blockdev copy ...passed 00:35:02.668 00:35:02.668 Run Summary: Type Total Ran Passed Failed Inactive 00:35:02.668 suites 1 1 n/a 0 0 00:35:02.668 tests 23 23 23 0 0 00:35:02.668 asserts 152 152 152 0 n/a 00:35:02.668 00:35:02.668 Elapsed time = 1.099 seconds 00:35:02.928 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:02.928 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.928 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:02.928 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.928 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:35:02.928 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:35:02.928 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:02.928 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:35:02.928 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:02.928 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:35:02.928 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:02.928 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:02.928 rmmod nvme_tcp 00:35:02.928 rmmod nvme_fabrics 00:35:02.928 rmmod nvme_keyring 00:35:02.928 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
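The teardown that starts here and finishes below is nvmftestfini from nvmf/common.sh. Reduced to its effective commands it is roughly the following sketch (pid 3938792 is this run's nvmf_tgt; the iptables pipeline mirrors the iptr helper visible in the trace, and the netns cleanup is elided where the trace elides it):

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem
    sync
    modprobe -v -r nvme-tcp nvme-fabrics                   # unloads nvme_tcp, nvme_fabrics, nvme_keyring
    kill 3938792                                           # stop the target app
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip the test's ACCEPT rules
    ip -4 addr flush cvl_0_1                               # clear the initiator-side address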
00:35:02.929 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:35:02.929 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:35:02.929 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3938792 ']' 00:35:02.929 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3938792 00:35:02.929 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3938792 ']' 00:35:02.929 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3938792 00:35:02.929 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:35:02.929 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:02.929 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3938792 00:35:03.189 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:35:03.189 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:35:03.189 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3938792' 00:35:03.189 killing process with pid 3938792 00:35:03.189 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3938792 00:35:03.189 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3938792 00:35:03.189 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:03.189 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:03.189 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:03.189 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:35:03.189 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:35:03.189 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:03.189 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:35:03.189 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:03.189 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:03.189 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:03.189 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:03.189 20:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:05.733 20:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:05.733 00:35:05.733 real 0m12.359s 00:35:05.733 user 
0m10.133s 00:35:05.733 sys 0m6.494s 00:35:05.733 20:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:05.733 20:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:05.733 ************************************ 00:35:05.733 END TEST nvmf_bdevio 00:35:05.733 ************************************ 00:35:05.733 20:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:35:05.733 00:35:05.733 real 5m1.027s 00:35:05.733 user 10m25.668s 00:35:05.733 sys 2m3.978s 00:35:05.733 20:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:05.733 20:13:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:05.733 ************************************ 00:35:05.733 END TEST nvmf_target_core_interrupt_mode 00:35:05.733 ************************************ 00:35:05.733 20:13:06 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:05.733 20:13:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:05.733 20:13:06 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:05.733 20:13:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:05.733 ************************************ 00:35:05.733 START TEST nvmf_interrupt 00:35:05.733 ************************************ 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:05.733 * Looking for test storage... 
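What follows is scripts/common.sh deciding whether the installed lcov (1.15) predates 2.x before it assembles the coverage flags. The comparison it steps through, condensed into a self-contained sketch (hypothetical name version_lt; the real cmp_versions also handles >, >=, and ==): split both strings on [.-:], then compare field by field as integers.

    version_lt() {
        local IFS=.-: i
        local -a a b
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1   # missing fields count as 0
        done
        return 1                                      # equal is not "less than"
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"     # the branch taken in the trace below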
00:35:05.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:05.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.733 --rc genhtml_branch_coverage=1 00:35:05.733 --rc genhtml_function_coverage=1 00:35:05.733 --rc genhtml_legend=1 00:35:05.733 --rc geninfo_all_blocks=1 00:35:05.733 --rc geninfo_unexecuted_blocks=1 00:35:05.733 00:35:05.733 ' 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:05.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.733 --rc genhtml_branch_coverage=1 00:35:05.733 --rc genhtml_function_coverage=1 00:35:05.733 --rc genhtml_legend=1 00:35:05.733 --rc geninfo_all_blocks=1 00:35:05.733 --rc geninfo_unexecuted_blocks=1 00:35:05.733 00:35:05.733 ' 00:35:05.733 20:13:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:05.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.733 --rc genhtml_branch_coverage=1 00:35:05.733 --rc genhtml_function_coverage=1 00:35:05.733 --rc genhtml_legend=1 00:35:05.734 --rc geninfo_all_blocks=1 00:35:05.734 --rc geninfo_unexecuted_blocks=1 00:35:05.734 00:35:05.734 ' 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:05.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.734 --rc genhtml_branch_coverage=1 00:35:05.734 --rc genhtml_function_coverage=1 00:35:05.734 --rc genhtml_legend=1 00:35:05.734 --rc geninfo_all_blocks=1 00:35:05.734 --rc geninfo_unexecuted_blocks=1 00:35:05.734 00:35:05.734 ' 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:35:05.734 20:13:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:13.874 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:13.874 20:13:13 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:13.874 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:13.874 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:13.875 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:13.875 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:13.875 20:13:13 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:13.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:13.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:35:13.875 00:35:13.875 --- 10.0.0.2 ping statistics --- 00:35:13.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:13.875 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:13.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:13.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:35:13.875 00:35:13.875 --- 10.0.0.1 ping statistics --- 00:35:13.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:13.875 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3943373 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3943373 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3943373 ']' 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:13.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:13.875 20:13:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:13.875 [2024-11-26 20:13:13.956279] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:13.875 [2024-11-26 20:13:13.957252] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:35:13.875 [2024-11-26 20:13:13.957291] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:13.875 [2024-11-26 20:13:14.054378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:13.875 [2024-11-26 20:13:14.089928] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:35:13.875 [2024-11-26 20:13:14.089962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:13.875 [2024-11-26 20:13:14.089970] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:13.875 [2024-11-26 20:13:14.089976] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:13.875 [2024-11-26 20:13:14.089982] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:13.875 [2024-11-26 20:13:14.091139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:13.875 [2024-11-26 20:13:14.091142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:13.875 [2024-11-26 20:13:14.147349] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:13.875 [2024-11-26 20:13:14.148106] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:13.875 [2024-11-26 20:13:14.148381] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:35:14.136 5000+0 records in 00:35:14.136 5000+0 records out 00:35:14.136 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0187676 s, 546 MB/s 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:14.136 AIO0 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:14.136 [2024-11-26 20:13:14.848135] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.136 20:13:14 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:14.136 [2024-11-26 20:13:14.892593] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3943373 0 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3943373 0 idle 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3943373 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:14.136 20:13:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:14.137 20:13:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:14.137 20:13:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:14.137 20:13:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:14.137 20:13:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:14.137 20:13:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:14.137 20:13:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3943373 -w 256 00:35:14.137 20:13:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:14.398 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3943373 root 20 0 128.2g 44928 32256 S 6.2 0.0 0:00.27 reactor_0' 00:35:14.398 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3943373 root 20 0 128.2g 44928 32256 S 6.2 0.0 0:00.27 reactor_0 00:35:14.398 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:14.398 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:14.398 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:35:14.398 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=6 00:35:14.398 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:14.398 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:14.398 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:14.398 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:14.398 20:13:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:14.398 20:13:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3943373 1 00:35:14.398 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3943373 1 idle 00:35:14.398 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3943373 00:35:14.398 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:14.398 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:14.398 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:14.398 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:14.398 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:14.398 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:14.398 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:14.398 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:14.398 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:14.398 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3943373 -w 256 00:35:14.398 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3943395 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3943395 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3943735 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3943373 0 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3943373 0 busy 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3943373 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3943373 -w 256 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3943373 root 20 0 128.2g 44928 32256 R 46.7 0.0 0:00.35 reactor_0' 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3943373 root 20 0 128.2g 44928 32256 R 46.7 0.0 0:00.35 reactor_0 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=46.7 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=46 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3943373 1 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3943373 1 busy 00:35:14.658 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3943373 00:35:14.919 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:14.919 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:14.919 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:14.919 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:14.919 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:14.919 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:14.919 20:13:15 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:35:14.919 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:14.919 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3943373 -w 256 00:35:14.919 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:14.919 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3943395 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.22 reactor_1' 00:35:14.919 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:14.919 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3943395 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.22 reactor_1 00:35:14.919 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:14.919 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:14.919 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:14.919 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:14.919 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:14.919 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:14.919 20:13:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:14.919 20:13:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3943735 00:35:24.922 Initializing NVMe Controllers 00:35:24.922 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:24.922 Controller IO queue size 256, less than required. 00:35:24.922 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:24.922 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:24.922 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:24.922 Initialization complete. Launching workers. 
00:35:24.922 ======================================================== 00:35:24.922 Latency(us) 00:35:24.922 Device Information : IOPS MiB/s Average min max 00:35:24.922 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19917.70 77.80 12856.95 4189.60 31868.77 00:35:24.922 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19987.80 78.08 12808.67 8550.33 28170.93 00:35:24.922 ======================================================== 00:35:24.922 Total : 39905.50 155.88 12832.77 4189.60 31868.77 00:35:24.922 00:35:24.922 20:13:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:24.922 20:13:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3943373 0 00:35:24.922 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3943373 0 idle 00:35:24.922 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3943373 00:35:24.922 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:24.922 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:24.922 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3943373 -w 256 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3943373 root 20 0 128.2g 44928 32256 R 0.0 0.0 0:20.27 reactor_0' 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3943373 root 20 0 128.2g 44928 32256 R 0.0 0.0 0:20.27 reactor_0 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3943373 1 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3943373 1 idle 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3943373 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3943373 -w 256 00:35:24.923 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:25.183 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3943395 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:35:25.184 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3943395 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:35:25.184 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:25.184 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:25.184 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:25.184 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:25.184 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:25.184 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:25.184 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:25.184 20:13:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:25.184 20:13:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:25.754 20:13:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:35:25.754 20:13:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:35:25.754 20:13:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:25.754 20:13:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:25.754 20:13:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:35:27.665 20:13:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:27.665 20:13:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:27.665 20:13:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:27.665 20:13:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:27.665 20:13:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:27.665 20:13:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:35:27.665 20:13:28 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:35:27.665 20:13:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3943373 0 00:35:27.665 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3943373 0 idle 00:35:27.665 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3943373 00:35:27.665 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:27.665 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:27.665 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:27.665 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:27.665 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:27.665 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:27.665 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:27.665 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:27.665 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:27.665 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3943373 -w 256 00:35:27.665 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:27.926 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3943373 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.59 reactor_0' 00:35:27.926 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3943373 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.59 reactor_0 00:35:27.926 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:27.926 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:27.926 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:27.926 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:27.926 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:27.926 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:27.926 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:27.926 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:27.926 20:13:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:27.926 20:13:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3943373 1 00:35:27.926 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3943373 1 idle 00:35:27.926 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3943373 00:35:27.926 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:27.926 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:27.926 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:27.926 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:27.926 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:27.926 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:27.926 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
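The busy/idle assertions traced repeatedly here all reduce to one sampling loop in interrupt/common.sh: run a single batch iteration of top for the target PID, isolate the reactor_<idx> thread, strip leading whitespace, take the %CPU column (field 9), truncate it to an integer, and compare against fixed thresholds (busy means at least 65% CPU, idle at most 30%), retrying for up to 10 samples. A minimal standalone sketch of that check, assuming bash and procps top; the function name and the inter-sample pause are illustrative, the pipeline and thresholds mirror the traced commands:

    # Succeed when thread reactor_<idx> of <pid> is in <state> (busy|idle).
    reactor_state_is() {
        local pid=$1 idx=$2 state=$3 j line cpu_rate
        local busy_threshold=65 idle_threshold=30
        hash top || return 1                     # same availability check as the trace
        for (( j = 10; j != 0; j-- )); do
            # One non-interactive top pass, threads view (-H), this PID only.
            line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}")
            cpu_rate=$(echo "$line" | sed -e 's/^\s*//g' | awk '{print $9}')
            cpu_rate=${cpu_rate%.*}              # "99.9" -> "99": integer math only
            if [[ -n $cpu_rate ]]; then
                if [[ $state == busy ]]; then
                    (( cpu_rate >= busy_threshold )) && return 0
                else
                    (( cpu_rate <= idle_threshold )) && return 0
                fi
            fi
            sleep 0.5                            # assumed pause before resampling
        done
        return 1
    }

Against the samples above this yields exactly the transition the test asserts: reactor_1 at 99.9% CPU while the queued I/O job runs, then 0.0% once it completes.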
00:35:27.926 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:27.926 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:27.926 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3943373 -w 256 00:35:27.926 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:28.187 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3943395 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.11 reactor_1' 00:35:28.187 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3943395 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.11 reactor_1 00:35:28.187 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:28.187 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:28.187 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:28.187 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:28.187 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:28.187 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:28.187 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:28.187 20:13:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:28.187 20:13:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:28.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:28.448 rmmod nvme_tcp 00:35:28.448 rmmod nvme_fabrics 00:35:28.448 rmmod nvme_keyring 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
3943373 ']' 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3943373 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3943373 ']' 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3943373 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3943373 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3943373' 00:35:28.448 killing process with pid 3943373 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3943373 00:35:28.448 20:13:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3943373 00:35:28.707 20:13:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:28.707 20:13:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:28.707 20:13:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:28.707 20:13:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:35:28.707 20:13:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:35:28.708 20:13:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:28.708 20:13:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:35:28.708 20:13:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:28.708 20:13:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:28.708 20:13:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:28.708 20:13:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:28.708 20:13:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:30.618 20:13:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:30.618 00:35:30.618 real 0m25.279s 00:35:30.618 user 0m40.487s 00:35:30.618 sys 0m9.451s 00:35:30.618 20:13:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:30.618 20:13:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:30.618 ************************************ 00:35:30.618 END TEST nvmf_interrupt 00:35:30.618 ************************************ 00:35:30.879 00:35:30.879 real 30m18.286s 00:35:30.879 user 62m16.357s 00:35:30.879 sys 10m20.447s 00:35:30.879 20:13:31 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:30.879 20:13:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:30.879 ************************************ 00:35:30.879 END TEST nvmf_tcp 00:35:30.879 ************************************ 00:35:30.879 20:13:31 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:35:30.879 20:13:31 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:30.879 20:13:31 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:30.879 20:13:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:30.879 20:13:31 -- common/autotest_common.sh@10 -- # set +x 00:35:30.879 ************************************ 00:35:30.879 START TEST spdkcli_nvmf_tcp 00:35:30.879 ************************************ 00:35:30.879 20:13:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:30.879 * Looking for test storage... 00:35:30.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:30.879 20:13:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:30.879 20:13:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:35:30.879 20:13:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:31.140 20:13:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:31.140 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:31.140 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:31.140 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:31.140 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:35:31.140 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:35:31.140 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:35:31.140 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:35:31.140 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:35:31.140 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:35:31.140 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:35:31.140 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:31.140 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:35:31.140 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:35:31.140 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:31.140 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:31.140 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:35:31.140 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:31.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.141 --rc genhtml_branch_coverage=1 00:35:31.141 --rc genhtml_function_coverage=1 00:35:31.141 --rc genhtml_legend=1 00:35:31.141 --rc geninfo_all_blocks=1 00:35:31.141 --rc geninfo_unexecuted_blocks=1 00:35:31.141 00:35:31.141 ' 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:31.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.141 --rc genhtml_branch_coverage=1 00:35:31.141 --rc genhtml_function_coverage=1 00:35:31.141 --rc genhtml_legend=1 00:35:31.141 --rc geninfo_all_blocks=1 00:35:31.141 --rc geninfo_unexecuted_blocks=1 00:35:31.141 00:35:31.141 ' 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:31.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.141 --rc genhtml_branch_coverage=1 00:35:31.141 --rc genhtml_function_coverage=1 00:35:31.141 --rc genhtml_legend=1 00:35:31.141 --rc geninfo_all_blocks=1 00:35:31.141 --rc geninfo_unexecuted_blocks=1 00:35:31.141 00:35:31.141 ' 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:31.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.141 --rc genhtml_branch_coverage=1 00:35:31.141 --rc genhtml_function_coverage=1 00:35:31.141 --rc genhtml_legend=1 00:35:31.141 --rc geninfo_all_blocks=1 00:35:31.141 --rc geninfo_unexecuted_blocks=1 00:35:31.141 00:35:31.141 ' 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:31.141 
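The lcov probe just traced ends in scripts/common.sh's generic version comparison: both version strings are split on '.', '-' and ':' (the IFS=.-: reads above), the shorter field list is padded, and the fields are compared pairwise as numbers. A compact sketch of the same idea, simplified to purely numeric fields; the function name is illustrative:

    # Return 0 when version $1 sorts before version $2 (numeric fields only).
    version_lt() {
        local IFS=.-:                            # split on the same separators
        local -a v1=($1) v2=($2)
        local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}    # pad missing fields with 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                                 # equal is not less-than
    }

    version_lt 1.15 2 && echo "lcov older than 2.x: use legacy --rc lcov_* names"

That is the branch taken in this run: lcov 1.15 sorts before 2, so LCOV_OPTS is assembled with the old lcov_branch_coverage/lcov_function_coverage option names seen in the trace.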
20:13:31 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:31.141 20:13:31 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:31.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3946917 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3946917 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3946917 ']' 00:35:31.141 20:13:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:31.142 20:13:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:31.142 20:13:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:31.142 20:13:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:31.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:31.142 20:13:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:31.142 20:13:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:31.142 [2024-11-26 20:13:31.846853] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
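At this point the harness has forked build/bin/nvmf_tgt -m 0x3 -p 0 (reactors on cores 0 and 1, main core 0) and waitforlisten is blocking until the app answers on /var/tmp/spdk.sock. A sketch of that start-and-wait pattern, assuming the stock rpc.py client with rpc_get_methods as a cheap liveness probe; the retry count and sleep interval are illustrative, not the helper's exact values:

    # Launch the target and block until its RPC socket accepts requests.
    start_nvmf_tgt() {
        local sock=/var/tmp/spdk.sock i
        ./build/bin/nvmf_tgt -m 0x3 -p 0 &     # -m 0x3: reactors on cores 0 and 1
        nvmf_tgt_pid=$!
        for (( i = 0; i < 100; i++ )); do
            # Any trivial RPC succeeds only once the app has bound the socket.
            if ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.1
        done
        kill "$nvmf_tgt_pid" 2>/dev/null       # give up: target never came up
        return 1
    }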
00:35:31.142 [2024-11-26 20:13:31.846926] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3946917 ] 00:35:31.142 [2024-11-26 20:13:31.938588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:31.403 [2024-11-26 20:13:31.993720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:31.403 [2024-11-26 20:13:31.993727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:31.975 20:13:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:31.975 20:13:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:35:31.975 20:13:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:31.975 20:13:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:31.975 20:13:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:31.975 20:13:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:31.975 20:13:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:31.975 20:13:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:31.975 20:13:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:31.975 20:13:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:31.975 20:13:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:31.975 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:31.975 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:31.975 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:31.975 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:31.975 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:31.975 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:31.975 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:31.975 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:31.975 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:31.975 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:31.976 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:31.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:31.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:31.976 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:31.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:31.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:35:31.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:31.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:31.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:31.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:31.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:31.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:31.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:31.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:31.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:31.976 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:31.976 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:31.976 ' 00:35:35.279 [2024-11-26 20:13:35.458479] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:36.220 [2024-11-26 20:13:36.818609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:38.762 [2024-11-26 20:13:39.349616] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:41.301 [2024-11-26 20:13:41.576094] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:42.683 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:42.683 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:42.684 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:42.684 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:42.684 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:42.684 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:42.684 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:42.684 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:42.684 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:42.684 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:42.684 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:42.684 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:42.684 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:42.684 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:42.684 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:42.684 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:42.684 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:42.684 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:42.684 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:42.684 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:42.684 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:42.684 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:42.684 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:42.684 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:42.684 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:42.684 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:42.684 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:42.684 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:42.684 20:13:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:42.684 20:13:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:42.684 20:13:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:42.684 20:13:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:42.684 20:13:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:42.684 20:13:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:42.684 20:13:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:42.684 20:13:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:43.255 20:13:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:43.255 20:13:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:43.255 20:13:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:43.255 20:13:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:43.255 20:13:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:43.255 
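check_match, which just ran, is a golden-output comparison: dump the live spdkcli object tree with ll /nvmf, hand it to the match tool alongside a pre-recorded .match file (where volatile fields are wildcarded), and delete the capture afterwards. A sketch reconstructed from the three traced commands; the redirection into the .test file is implied by the later rm, and the relative paths stand in for the absolute workspace paths in the log:

    # Golden-output check: capture spdkcli state, diff against the .match file.
    testdir=test/spdkcli/match_files
    scripts/spdkcli.py ll /nvmf > "$testdir/spdkcli_nvmf.test"
    test/app/match/match "$testdir/spdkcli_nvmf.test.match"   # non-zero exit on mismatch
    rm -f "$testdir/spdkcli_nvmf.test"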
20:13:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:43.255 20:13:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:43.255 20:13:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:43.255 20:13:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:43.255 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:43.255 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:43.255 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:43.255 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:43.255 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:43.255 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:43.255 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:43.255 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:43.255 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:43.255 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:43.255 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:43.255 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:43.255 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:43.255 ' 00:35:49.838 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:49.838 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:49.838 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:49.838 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:49.838 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:49.838 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:49.838 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:49.838 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:49.838 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:49.838 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:49.838 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:49.838 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:49.838 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:49.838 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:49.838 20:13:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:49.838 20:13:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:49.838 20:13:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:49.839 
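The killprocess helper invoked next (and earlier for pid 3943373 in the interrupt test) takes three guarded steps before signalling: bail out cleanly if the PID is already gone, read the process's comm name so it never signals a sudo wrapper, then SIGTERM and reap. A condensed sketch with the same checks:

    # Kill an app this script started, tolerating an already-dead PID.
    killprocess() {
        local pid=$1 name
        [[ -n $pid ]] || return 1
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "Process with pid $pid is not found"
            return 0
        fi
        if [[ $(uname) == Linux ]]; then
            name=$(ps --no-headers -o comm= "$pid")
            [[ $name == sudo ]] && return 1    # never signal the sudo wrapper itself
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true        # reaping only works for our own children
    }

This double call is why the cleanup path below first kills pid 3946917, then reports "No such process" and "Process with pid 3946917 is not found" when common.sh tries a second time.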
20:13:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3946917 00:35:49.839 20:13:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3946917 ']' 00:35:49.839 20:13:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3946917 00:35:49.839 20:13:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:35:49.839 20:13:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:49.839 20:13:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3946917 00:35:49.839 20:13:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:49.839 20:13:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:49.839 20:13:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3946917' 00:35:49.839 killing process with pid 3946917 00:35:49.839 20:13:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3946917 00:35:49.839 20:13:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3946917 00:35:49.839 20:13:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:49.839 20:13:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:49.839 20:13:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3946917 ']' 00:35:49.839 20:13:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3946917 00:35:49.839 20:13:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3946917 ']' 00:35:49.839 20:13:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3946917 00:35:49.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3946917) - No such process 00:35:49.839 20:13:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3946917 is not found' 00:35:49.839 Process with pid 3946917 is not found 00:35:49.839 20:13:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:49.839 20:13:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:49.839 20:13:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:49.839 00:35:49.839 real 0m18.227s 00:35:49.839 user 0m40.456s 00:35:49.839 sys 0m0.935s 00:35:49.839 20:13:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:49.839 20:13:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:49.839 ************************************ 00:35:49.839 END TEST spdkcli_nvmf_tcp 00:35:49.839 ************************************ 00:35:49.839 20:13:49 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:49.839 20:13:49 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:49.839 20:13:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:49.839 20:13:49 -- common/autotest_common.sh@10 -- # set +x 00:35:49.839 ************************************ 00:35:49.839 START TEST nvmf_identify_passthru 00:35:49.839 ************************************ 00:35:49.839 20:13:49 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:49.839 * Looking for test 
storage... 00:35:49.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:49.839 20:13:49 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:49.839 20:13:49 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:35:49.839 20:13:49 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:49.839 20:13:50 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:49.839 20:13:50 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:49.839 20:13:50 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:49.839 20:13:50 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:49.839 20:13:50 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:49.839 20:13:50 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:49.839 20:13:50 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:49.839 20:13:50 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:49.839 20:13:50 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:49.839 20:13:50 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:49.839 20:13:50 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:49.839 20:13:50 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:49.839 20:13:50 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:35:49.839 20:13:50 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:49.839 20:13:50 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:49.839 20:13:50 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:49.839 20:13:50 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:49.839 20:13:50 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:49.839 20:13:50 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:49.839 20:13:50 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:49.839 20:13:50 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:49.839 20:13:50 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:49.839 20:13:50 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:49.839 20:13:50 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:49.839 20:13:50 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:49.839 20:13:50 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:49.839 20:13:50 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:49.839 20:13:50 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:49.839 20:13:50 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:49.839 20:13:50 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:49.839 20:13:50 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:49.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.839 --rc genhtml_branch_coverage=1 00:35:49.839 --rc genhtml_function_coverage=1 00:35:49.839 --rc genhtml_legend=1 00:35:49.839 --rc geninfo_all_blocks=1 00:35:49.839 --rc geninfo_unexecuted_blocks=1 00:35:49.839 00:35:49.839 ' 00:35:49.839 20:13:50 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:49.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.839 --rc genhtml_branch_coverage=1 00:35:49.839 --rc genhtml_function_coverage=1 00:35:49.839 --rc genhtml_legend=1 00:35:49.839 --rc geninfo_all_blocks=1 00:35:49.839 --rc geninfo_unexecuted_blocks=1 00:35:49.839 00:35:49.839 ' 00:35:49.839 20:13:50 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:49.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.839 --rc genhtml_branch_coverage=1 00:35:49.839 --rc genhtml_function_coverage=1 00:35:49.839 --rc genhtml_legend=1 00:35:49.839 --rc geninfo_all_blocks=1 00:35:49.839 --rc geninfo_unexecuted_blocks=1 00:35:49.839 00:35:49.839 ' 00:35:49.839 20:13:50 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:49.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.839 --rc genhtml_branch_coverage=1 00:35:49.839 --rc genhtml_function_coverage=1 00:35:49.839 --rc genhtml_legend=1 00:35:49.839 --rc geninfo_all_blocks=1 00:35:49.839 --rc geninfo_unexecuted_blocks=1 00:35:49.839 00:35:49.839 ' 00:35:49.839 20:13:50 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:49.840 20:13:50 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:49.840 20:13:50 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:49.840 20:13:50 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:49.840 20:13:50 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:49.840 20:13:50 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.840 20:13:50 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.840 20:13:50 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.840 20:13:50 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:49.840 20:13:50 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:49.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:49.840 20:13:50 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:49.840 20:13:50 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:49.840 20:13:50 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:49.840 20:13:50 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:49.840 20:13:50 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:49.840 20:13:50 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.840 20:13:50 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.840 20:13:50 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.840 20:13:50 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:49.840 20:13:50 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.840 20:13:50 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:49.840 20:13:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:49.840 20:13:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:49.840 20:13:50 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:49.840 20:13:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:57.981 20:13:57 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:57.981 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:57.981 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:57.981 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:57.981 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:57.982 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:57.982 20:13:57 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:35:57.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:35:57.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.706 ms
00:35:57.982
00:35:57.982 --- 10.0.0.2 ping statistics ---
00:35:57.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:57.982 rtt min/avg/max/mdev = 0.706/0.706/0.706/0.000 ms
00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:35:57.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:35:57.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms
00:35:57.982
00:35:57.982 --- 10.0.0.1 ping statistics ---
00:35:57.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:57.982 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms
00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0
00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:35:57.982 20:13:57 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:35:57.982 20:13:57 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify
00:35:57.982 20:13:57 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:57.982 20:13:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:57.982 20:13:57 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf
00:35:57.982 20:13:57 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=()
00:35:57.982 20:13:57 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs
00:35:57.982 20:13:57 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:35:57.982 20:13:57 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:35:57.982 20:13:57 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=()
00:35:57.982 20:13:57 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs
00:35:57.982 20:13:57 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:35:57.982 20:13:57 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:35:57.982 20:13:57 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:35:57.982 20:13:57 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:35:57.982 20:13:57 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0
00:35:57.982 20:13:57 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0
00:35:57.982 20:13:57 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0
00:35:57.982 20:13:57 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']'
00:35:57.982 20:13:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0
00:35:57.982 20:13:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:'
00:35:57.982 20:13:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}'
00:35:57.982 20:13:58 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487
00:35:57.982 20:13:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0
00:35:57.982 20:13:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:'
00:35:57.982 20:13:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}'
00:35:57.982 20:13:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG
00:35:57.982 20:13:58 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify
00:35:57.982 20:13:58 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:57.982 20:13:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:58.243 20:13:58 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt
00:35:58.243 20:13:58 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:58.243 20:13:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:58.243 20:13:58 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3954333
00:35:58.243 20:13:58 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:35:58.243 20:13:58 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:35:58.243 20:13:58 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3954333
00:35:58.243 20:13:58 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3954333 ']'
00:35:58.243 20:13:58 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:58.243 20:13:58 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:58.243 20:13:58 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:58.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:58.243 20:13:58 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:58.243 20:13:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:58.243 [2024-11-26 20:13:58.866284] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization...
00:35:58.243 [2024-11-26 20:13:58.866352] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:58.243 [2024-11-26 20:13:58.963375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:35:58.243 [2024-11-26 20:13:59.017084] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:58.243 [2024-11-26 20:13:59.017141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
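The nvmftestinit block above is the phy-mode network fixture: the two ports of the E810 NIC are split so the initiator stays in the root namespace while the target runs inside cvl_0_0_ns_spdk, and an iptables rule opens the NVMe/TCP port. A minimal sketch of the same wiring, assuming this run's interface names (cvl_0_0/cvl_0_1) and the 10.0.0.0/24 test subnet, both of which are per-rig values rather than fixed constants:

    #!/usr/bin/env bash
    # Move the target port into its own namespace; the initiator port stays put.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port; the SPDK_NVMF comment tags the rule for teardown.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Verify reachability in both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This is also why the nvmf_tgt launch at identify_passthru.sh@30 is prefixed with ip netns exec cvl_0_0_ns_spdk: NVMF_TARGET_NS_CMD wraps every target-side command, so the listener at 10.0.0.2:4420 lives in the target namespace while the host-side tools connect from the root namespace.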
00:35:58.243 [2024-11-26 20:13:59.017150] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:58.243 [2024-11-26 20:13:59.017157] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:58.243 [2024-11-26 20:13:59.017177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:58.243 [2024-11-26 20:13:59.019210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:58.243 [2024-11-26 20:13:59.019321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:58.243 [2024-11-26 20:13:59.019617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:58.243 [2024-11-26 20:13:59.019619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:59.187 20:13:59 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:59.187 20:13:59 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:35:59.187 20:13:59 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:59.187 20:13:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.187 20:13:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:59.187 INFO: Log level set to 20 00:35:59.187 INFO: Requests: 00:35:59.187 { 00:35:59.187 "jsonrpc": "2.0", 00:35:59.187 "method": "nvmf_set_config", 00:35:59.187 "id": 1, 00:35:59.187 "params": { 00:35:59.187 "admin_cmd_passthru": { 00:35:59.187 "identify_ctrlr": true 00:35:59.187 } 00:35:59.187 } 00:35:59.187 } 00:35:59.187 00:35:59.187 INFO: response: 00:35:59.187 { 00:35:59.187 "jsonrpc": "2.0", 00:35:59.187 "id": 1, 00:35:59.187 "result": true 00:35:59.187 } 00:35:59.187 00:35:59.187 20:13:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.187 20:13:59 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:59.187 20:13:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.187 20:13:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:59.187 INFO: Setting log level to 20 00:35:59.187 INFO: Setting log level to 20 00:35:59.187 INFO: Log level set to 20 00:35:59.187 INFO: Log level set to 20 00:35:59.187 INFO: Requests: 00:35:59.187 { 00:35:59.187 "jsonrpc": "2.0", 00:35:59.187 "method": "framework_start_init", 00:35:59.187 "id": 1 00:35:59.187 } 00:35:59.187 00:35:59.187 INFO: Requests: 00:35:59.187 { 00:35:59.187 "jsonrpc": "2.0", 00:35:59.187 "method": "framework_start_init", 00:35:59.187 "id": 1 00:35:59.187 } 00:35:59.187 00:35:59.187 [2024-11-26 20:13:59.791855] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:59.187 INFO: response: 00:35:59.187 { 00:35:59.187 "jsonrpc": "2.0", 00:35:59.187 "id": 1, 00:35:59.187 "result": true 00:35:59.187 } 00:35:59.187 00:35:59.187 INFO: response: 00:35:59.187 { 00:35:59.187 "jsonrpc": "2.0", 00:35:59.187 "id": 1, 00:35:59.187 "result": true 00:35:59.187 } 00:35:59.187 00:35:59.187 20:13:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.187 20:13:59 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:59.187 20:13:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.187 20:13:59 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:35:59.187 INFO: Setting log level to 40 00:35:59.187 INFO: Setting log level to 40 00:35:59.187 INFO: Setting log level to 40 00:35:59.187 [2024-11-26 20:13:59.805446] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:59.187 20:13:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.187 20:13:59 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:59.187 20:13:59 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:59.187 20:13:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:59.187 20:13:59 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:35:59.187 20:13:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.187 20:13:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:59.448 Nvme0n1 00:35:59.448 20:14:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.448 20:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:59.448 20:14:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.448 20:14:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:59.448 20:14:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.448 20:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:59.448 20:14:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.448 20:14:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:59.448 20:14:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.448 20:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:59.448 20:14:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.448 20:14:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:59.448 [2024-11-26 20:14:00.209054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:59.448 20:14:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.448 20:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:59.448 20:14:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.448 20:14:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:59.448 [ 00:35:59.448 { 00:35:59.448 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:59.448 "subtype": "Discovery", 00:35:59.448 "listen_addresses": [], 00:35:59.448 "allow_any_host": true, 00:35:59.448 "hosts": [] 00:35:59.448 }, 00:35:59.448 { 00:35:59.448 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:59.448 "subtype": "NVMe", 00:35:59.448 "listen_addresses": [ 00:35:59.448 { 00:35:59.448 "trtype": "TCP", 00:35:59.448 "adrfam": "IPv4", 00:35:59.448 "traddr": "10.0.0.2", 00:35:59.448 "trsvcid": "4420" 00:35:59.448 } 00:35:59.448 ], 00:35:59.448 "allow_any_host": true, 00:35:59.448 "hosts": [], 00:35:59.448 "serial_number": 
"SPDK00000000000001", 00:35:59.448 "model_number": "SPDK bdev Controller", 00:35:59.448 "max_namespaces": 1, 00:35:59.448 "min_cntlid": 1, 00:35:59.448 "max_cntlid": 65519, 00:35:59.448 "namespaces": [ 00:35:59.448 { 00:35:59.448 "nsid": 1, 00:35:59.448 "bdev_name": "Nvme0n1", 00:35:59.448 "name": "Nvme0n1", 00:35:59.448 "nguid": "36344730526054870025384500000044", 00:35:59.448 "uuid": "36344730-5260-5487-0025-384500000044" 00:35:59.448 } 00:35:59.448 ] 00:35:59.448 } 00:35:59.448 ] 00:35:59.448 20:14:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.448 20:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:59.448 20:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:59.448 20:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:59.708 20:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:35:59.708 20:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:59.708 20:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:59.708 20:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:59.968 20:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:35:59.968 20:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:35:59.968 20:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:35:59.968 20:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:59.968 20:14:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.968 20:14:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:59.968 20:14:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.968 20:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:59.968 20:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:59.968 20:14:00 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:59.968 20:14:00 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:35:59.968 20:14:00 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:59.968 20:14:00 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:35:59.968 20:14:00 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:59.968 20:14:00 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:59.968 rmmod nvme_tcp 00:35:59.968 rmmod nvme_fabrics 00:35:59.968 rmmod nvme_keyring 00:35:59.968 20:14:00 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:59.968 20:14:00 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:35:59.968 20:14:00 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:35:59.968 20:14:00 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
3954333 ']'
00:35:59.968 20:14:00 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3954333
00:35:59.968 20:14:00 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3954333 ']'
00:35:59.968 20:14:00 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3954333
00:35:59.968 20:14:00 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname
00:35:59.968 20:14:00 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:59.968 20:14:00 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3954333
00:36:00.230 20:14:00 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:36:00.230 20:14:00 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:36:00.230 20:14:00 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3954333'
00:36:00.230 killing process with pid 3954333
00:36:00.230 20:14:00 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3954333
00:36:00.230 20:14:00 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3954333
00:36:00.491 20:14:01 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:36:00.491 20:14:01 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:36:00.491 20:14:01 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:36:00.491 20:14:01 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr
00:36:00.491 20:14:01 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save
00:36:00.491 20:14:01 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:36:00.491 20:14:01 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore
00:36:00.491 20:14:01 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:36:00.491 20:14:01 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns
00:36:00.491 20:14:01 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:00.491 20:14:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:36:00.491 20:14:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:02.403 20:14:03 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:36:02.403
00:36:02.403 real 0m13.311s
00:36:02.403 user 0m10.475s
00:36:02.403 sys 0m6.825s
00:36:02.403 20:14:03 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:02.403 20:14:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:36:02.403 ************************************
00:36:02.403 END TEST nvmf_identify_passthru
00:36:02.403 ************************************
00:36:02.403 20:14:03 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh
00:36:02.403 20:14:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:36:02.403 20:14:03 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:02.403 20:14:03 -- common/autotest_common.sh@10 -- # set +x
00:36:02.751 ************************************
00:36:02.751 START TEST nvmf_dif
00:36:02.751 ************************************
00:36:02.751 20:14:03 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh
00:36:02.751 * Looking for test storage...
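The iptr call in the teardown above (nvmf/common.sh@297/@791) is the counterpart of the tagged ACCEPT rule from setup: because every firewall rule the harness inserts carries an 'SPDK_NVMF:' comment, teardown can round-trip the whole ruleset and drop exactly the test rules, with no bookkeeping of rule numbers or positions. A one-line sketch of that idea:

    # Rules lacking the SPDK_NVMF comment tag pass through unchanged.
    iptables-save | grep -v SPDK_NVMF | iptables-restore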
00:36:02.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:02.751 20:14:03 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:02.751 20:14:03 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:36:02.751 20:14:03 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:02.751 20:14:03 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:02.751 20:14:03 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:02.751 20:14:03 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:02.751 20:14:03 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:02.751 20:14:03 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:36:02.751 20:14:03 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:36:02.751 20:14:03 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:36:02.751 20:14:03 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:36:02.751 20:14:03 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:36:02.751 20:14:03 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:36:02.751 20:14:03 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:36:02.751 20:14:03 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:02.751 20:14:03 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:36:02.751 20:14:03 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:36:02.751 20:14:03 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:02.751 20:14:03 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:02.751 20:14:03 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:36:02.751 20:14:03 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:36:02.751 20:14:03 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:02.751 20:14:03 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:36:02.751 20:14:03 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:36:02.751 20:14:03 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:36:02.751 20:14:03 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:36:02.751 20:14:03 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:02.751 20:14:03 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:36:02.751 20:14:03 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:36:02.751 20:14:03 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:02.751 20:14:03 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:02.751 20:14:03 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:36:02.751 20:14:03 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:02.751 20:14:03 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:02.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.751 --rc genhtml_branch_coverage=1 00:36:02.751 --rc genhtml_function_coverage=1 00:36:02.751 --rc genhtml_legend=1 00:36:02.751 --rc geninfo_all_blocks=1 00:36:02.751 --rc geninfo_unexecuted_blocks=1 00:36:02.751 00:36:02.751 ' 00:36:02.751 20:14:03 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:02.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.751 --rc genhtml_branch_coverage=1 00:36:02.751 --rc genhtml_function_coverage=1 00:36:02.751 --rc genhtml_legend=1 00:36:02.751 --rc geninfo_all_blocks=1 00:36:02.751 --rc geninfo_unexecuted_blocks=1 00:36:02.751 00:36:02.751 ' 00:36:02.751 20:14:03 nvmf_dif -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:36:02.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.751 --rc genhtml_branch_coverage=1 00:36:02.751 --rc genhtml_function_coverage=1 00:36:02.751 --rc genhtml_legend=1 00:36:02.751 --rc geninfo_all_blocks=1 00:36:02.751 --rc geninfo_unexecuted_blocks=1 00:36:02.751 00:36:02.751 ' 00:36:02.751 20:14:03 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:02.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.751 --rc genhtml_branch_coverage=1 00:36:02.751 --rc genhtml_function_coverage=1 00:36:02.751 --rc genhtml_legend=1 00:36:02.751 --rc geninfo_all_blocks=1 00:36:02.751 --rc geninfo_unexecuted_blocks=1 00:36:02.751 00:36:02.751 ' 00:36:02.751 20:14:03 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:02.751 20:14:03 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:02.751 20:14:03 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:02.751 20:14:03 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:02.751 20:14:03 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:02.751 20:14:03 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:02.751 20:14:03 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:02.751 20:14:03 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:02.751 20:14:03 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:02.751 20:14:03 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:02.751 20:14:03 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:02.751 20:14:03 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:02.751 20:14:03 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:02.751 20:14:03 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:02.751 20:14:03 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:02.751 20:14:03 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:02.751 20:14:03 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:02.751 20:14:03 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:02.752 20:14:03 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:02.752 20:14:03 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:36:02.752 20:14:03 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:02.752 20:14:03 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:02.752 20:14:03 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:02.752 20:14:03 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.752 20:14:03 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.752 20:14:03 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.752 20:14:03 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:02.752 20:14:03 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.752 20:14:03 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:36:02.752 20:14:03 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:02.752 20:14:03 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:02.752 20:14:03 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:02.752 20:14:03 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:02.752 20:14:03 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:02.752 20:14:03 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:02.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:02.752 20:14:03 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:02.752 20:14:03 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:02.752 20:14:03 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:02.752 20:14:03 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:02.752 20:14:03 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:02.752 20:14:03 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:02.752 20:14:03 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:02.752 20:14:03 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:02.752 20:14:03 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:02.752 20:14:03 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:02.752 20:14:03 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:02.752 20:14:03 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:02.752 20:14:03 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:02.752 20:14:03 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:02.752 20:14:03 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:02.752 20:14:03 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:02.752 20:14:03 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:02.752 20:14:03 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:02.752 20:14:03 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:36:02.752 20:14:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:10.922 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:10.922 
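The gather_supported_nvmf_pci_devs pass above buckets NICs purely by PCI vendor:device ID (0x8086:0x1592/0x159b for E810, 0x8086:0x37d2 for X722, the 0x15b3 list for Mellanox) and then resolves each selected function to its kernel netdev through sysfs, which is where the 'Found net devices under ...' lines come from. A rough sketch of both steps, assuming a pci_bus_cache-style map keyed on "vendor:device" (the real cache is populated elsewhere in nvmf/common.sh; here it is rebuilt directly from sysfs for illustration):

    #!/usr/bin/env bash
    declare -A pci_bus_cache
    for dev in /sys/bus/pci/devices/*; do
        key="$(cat "$dev/vendor"):$(cat "$dev/device")"   # e.g. 0x8086:0x159b
        pci_bus_cache[$key]+="${dev##*/} "                # append this function's BDF
    done
    e810=(${pci_bus_cache["0x8086:0x1592"]} ${pci_bus_cache["0x8086:0x159b"]})
    for pci in "${e810[@]}"; do
        for net in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $net ]] || continue                     # function not bound to a netdev
            echo "Found net devices under $pci: ${net##*/}"
        done
    done

On this rig both 0x159b functions resolve to cvl_0_0 and cvl_0_1, which is how TCP_INTERFACE_LIST ends up with two entries and the target/initiator interfaces get assigned in the lines that follow.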
20:14:10 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:10.922 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:10.922 20:14:10 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:10.923 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:10.923 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:10.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:10.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:36:10.923 00:36:10.923 --- 10.0.0.2 ping statistics --- 00:36:10.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:10.923 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:10.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:10.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:36:10.923 00:36:10.923 --- 10.0.0.1 ping statistics --- 00:36:10.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:10.923 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:10.923 20:14:10 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:14.226 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:14.226 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:14.226 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:14.226 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:14.226 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:14.226 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:14.226 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:14.226 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:14.226 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:14.226 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:36:14.226 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:14.226 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:14.226 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:14.226 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:14.226 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:14.226 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:14.226 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:14.227 20:14:14 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:14.227 20:14:14 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:14.227 20:14:14 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:14.227 20:14:14 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:14.227 20:14:14 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:14.227 20:14:14 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:14.227 20:14:14 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:14.227 20:14:14 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:14.227 20:14:14 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:14.227 20:14:14 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:14.227 20:14:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:14.227 20:14:14 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3960512 00:36:14.227 20:14:14 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3960512 00:36:14.227 20:14:14 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:14.227 20:14:14 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3960512 ']' 00:36:14.227 20:14:14 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:14.227 20:14:14 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:14.227 20:14:14 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:36:14.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:14.227 20:14:14 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:14.227 20:14:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:14.227 [2024-11-26 20:14:14.895654] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:36:14.227 [2024-11-26 20:14:14.895716] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:14.227 [2024-11-26 20:14:14.994597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:14.488 [2024-11-26 20:14:15.045561] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:14.488 [2024-11-26 20:14:15.045618] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:14.488 [2024-11-26 20:14:15.045627] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:14.488 [2024-11-26 20:14:15.045635] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:14.488 [2024-11-26 20:14:15.045641] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:14.488 [2024-11-26 20:14:15.046476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:15.059 20:14:15 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:15.059 20:14:15 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:36:15.059 20:14:15 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:15.059 20:14:15 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:15.059 20:14:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:15.059 20:14:15 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:15.059 20:14:15 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:15.059 20:14:15 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:15.059 20:14:15 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.059 20:14:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:15.059 [2024-11-26 20:14:15.773937] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:15.059 20:14:15 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.059 20:14:15 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:15.059 20:14:15 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:15.059 20:14:15 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:15.059 20:14:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:15.059 ************************************ 00:36:15.059 START TEST fio_dif_1_default 00:36:15.059 ************************************ 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:15.059 bdev_null0 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:15.059 [2024-11-26 20:14:15.866420] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:15.059 { 00:36:15.059 "params": { 00:36:15.059 "name": "Nvme$subsystem", 00:36:15.059 "trtype": "$TEST_TRANSPORT", 00:36:15.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:15.059 "adrfam": "ipv4", 00:36:15.059 "trsvcid": "$NVMF_PORT", 00:36:15.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:15.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:15.059 "hdgst": ${hdgst:-false}, 00:36:15.059 "ddgst": ${ddgst:-false} 00:36:15.059 }, 00:36:15.059 "method": "bdev_nvme_attach_controller" 00:36:15.059 } 00:36:15.059 EOF 00:36:15.059 )") 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:15.059 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:36:15.321 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:15.321 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:15.321 20:14:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:36:15.322 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:15.322 20:14:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:15.322 20:14:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:15.322 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:36:15.322 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:15.322 20:14:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
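Stripped of the harness plumbing, the create_subsystem trace above is just four RPCs against the running target. A minimal stand-alone sketch, assuming a checkout at $SPDK and the default /var/tmp/spdk.sock RPC socket (both assumptions; the RPC names and arguments are copied verbatim from the trace):

    # Sketch only: $SPDK and the socket path are assumptions, not taken from this log.
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1   # 64 MiB null bdev, 512 B blocks + 16 B metadata, DIF type 1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0    # expose the null bdev as a namespace
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The later multi-subsystem and rand-params tests below repeat the same four-step sequence, varying only the subsystem index and the --dif-type argument.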
00:36:15.322 20:14:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:36:15.322 20:14:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:15.322 "params": { 00:36:15.322 "name": "Nvme0", 00:36:15.322 "trtype": "tcp", 00:36:15.322 "traddr": "10.0.0.2", 00:36:15.322 "adrfam": "ipv4", 00:36:15.322 "trsvcid": "4420", 00:36:15.322 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:15.322 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:15.322 "hdgst": false, 00:36:15.322 "ddgst": false 00:36:15.322 }, 00:36:15.322 "method": "bdev_nvme_attach_controller" 00:36:15.322 }' 00:36:15.322 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:15.322 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:15.322 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:15.322 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:15.322 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:15.322 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:15.322 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:15.322 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:15.322 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:15.322 20:14:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:15.583 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:15.583 fio-3.35 00:36:15.583 Starting 1 thread 00:36:27.814 00:36:27.814 filename0: (groupid=0, jobs=1): err= 0: pid=3961046: Tue Nov 26 20:14:26 2024 00:36:27.814 read: IOPS=98, BW=392KiB/s (402kB/s)(3936KiB/10035msec) 00:36:27.814 slat (nsec): min=5531, max=36257, avg=6401.24, stdev=1795.22 00:36:27.815 clat (usec): min=857, max=43519, avg=40773.26, stdev=3624.55 00:36:27.815 lat (usec): min=863, max=43555, avg=40779.66, stdev=3624.59 00:36:27.815 clat percentiles (usec): 00:36:27.815 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:36:27.815 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:27.815 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:36:27.815 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:36:27.815 | 99.99th=[43779] 00:36:27.815 bw ( KiB/s): min= 352, max= 416, per=99.69%, avg=392.00, stdev=17.60, samples=20 00:36:27.815 iops : min= 88, max= 104, avg=98.00, stdev= 4.40, samples=20 00:36:27.815 lat (usec) : 1000=0.61% 00:36:27.815 lat (msec) : 2=0.20%, 50=99.19% 00:36:27.815 cpu : usr=93.15%, sys=6.60%, ctx=24, majf=0, minf=222 00:36:27.815 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:27.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.815 issued rwts: total=984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:27.815 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:27.815 
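The attach-controller parameters printed at the top of this block are the entire host-side configuration: gen_nvmf_target_json wraps them in a bdev subsystem config and pipes the result to fio over /dev/fd/62. A hand-written equivalent, assuming an on-disk file name and a checkout at $SPDK (both illustrative; job.fio stands in for the per-test job file gen_fio_conf builds):

    # Sketch under assumptions: /tmp/spdk.json, $SPDK and job.fio are illustrative names.
    cat > /tmp/spdk.json <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0",
                      "hdgst": false, "ddgst": false }
        }]
      }]
    }
    EOF
    # Preload the SPDK engine so --ioengine=spdk_bdev resolves, as in the LD_PRELOAD line above.
    LD_PRELOAD="$SPDK/build/fio/spdk_bdev" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=/tmp/spdk.json job.fio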
00:36:27.815 Run status group 0 (all jobs): 00:36:27.815 READ: bw=392KiB/s (402kB/s), 392KiB/s-392KiB/s (402kB/s-402kB/s), io=3936KiB (4030kB), run=10035-10035msec 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.815 00:36:27.815 real 0m11.363s 00:36:27.815 user 0m18.465s 00:36:27.815 sys 0m1.141s 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:27.815 ************************************ 00:36:27.815 END TEST fio_dif_1_default 00:36:27.815 ************************************ 00:36:27.815 20:14:27 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:27.815 20:14:27 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:27.815 20:14:27 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:27.815 20:14:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:27.815 ************************************ 00:36:27.815 START TEST fio_dif_1_multi_subsystems 00:36:27.815 ************************************ 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:27.815 bdev_null0 00:36:27.815 20:14:27 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:27.815 [2024-11-26 20:14:27.311774] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:27.815 bdev_null1 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.815 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:27.816 { 00:36:27.816 "params": { 00:36:27.816 "name": "Nvme$subsystem", 00:36:27.816 "trtype": "$TEST_TRANSPORT", 00:36:27.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:27.816 "adrfam": "ipv4", 00:36:27.816 "trsvcid": "$NVMF_PORT", 00:36:27.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:27.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:27.816 "hdgst": ${hdgst:-false}, 00:36:27.816 "ddgst": ${ddgst:-false} 00:36:27.816 }, 00:36:27.816 "method": "bdev_nvme_attach_controller" 00:36:27.816 } 00:36:27.816 EOF 00:36:27.816 )") 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:27.816 { 00:36:27.816 "params": { 00:36:27.816 "name": "Nvme$subsystem", 00:36:27.816 "trtype": "$TEST_TRANSPORT", 00:36:27.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:27.816 "adrfam": "ipv4", 00:36:27.816 "trsvcid": "$NVMF_PORT", 00:36:27.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:27.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:27.816 "hdgst": ${hdgst:-false}, 00:36:27.816 "ddgst": ${ddgst:-false} 00:36:27.816 }, 00:36:27.816 "method": "bdev_nvme_attach_controller" 00:36:27.816 } 00:36:27.816 EOF 00:36:27.816 )") 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:27.816 "params": { 00:36:27.816 "name": "Nvme0", 00:36:27.816 "trtype": "tcp", 00:36:27.816 "traddr": "10.0.0.2", 00:36:27.816 "adrfam": "ipv4", 00:36:27.816 "trsvcid": "4420", 00:36:27.816 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:27.816 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:27.816 "hdgst": false, 00:36:27.816 "ddgst": false 00:36:27.816 }, 00:36:27.816 "method": "bdev_nvme_attach_controller" 00:36:27.816 },{ 00:36:27.816 "params": { 00:36:27.816 "name": "Nvme1", 00:36:27.816 "trtype": "tcp", 00:36:27.816 "traddr": "10.0.0.2", 00:36:27.816 "adrfam": "ipv4", 00:36:27.816 "trsvcid": "4420", 00:36:27.816 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:27.816 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:27.816 "hdgst": false, 00:36:27.816 "ddgst": false 00:36:27.816 }, 00:36:27.816 "method": "bdev_nvme_attach_controller" 00:36:27.816 }' 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1349 -- # asan_lib= 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:27.816 20:14:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:27.816 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:27.816 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:27.816 fio-3.35 00:36:27.816 Starting 2 threads 00:36:37.841 00:36:37.841 filename0: (groupid=0, jobs=1): err= 0: pid=3963315: Tue Nov 26 20:14:38 2024 00:36:37.841 read: IOPS=189, BW=758KiB/s (777kB/s)(7584KiB/10001msec) 00:36:37.841 slat (nsec): min=5532, max=35333, avg=5880.40, stdev=1243.89 00:36:37.841 clat (usec): min=586, max=43205, avg=21081.36, stdev=20188.54 00:36:37.841 lat (usec): min=592, max=43237, avg=21087.24, stdev=20188.40 00:36:37.841 clat percentiles (usec): 00:36:37.841 | 1.00th=[ 644], 5.00th=[ 693], 10.00th=[ 783], 20.00th=[ 816], 00:36:37.841 | 30.00th=[ 840], 40.00th=[ 857], 50.00th=[40633], 60.00th=[41157], 00:36:37.841 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:37.841 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:36:37.841 | 99.99th=[43254] 00:36:37.841 bw ( KiB/s): min= 672, max= 768, per=66.12%, avg=759.58, stdev=23.47, samples=19 00:36:37.841 iops : min= 168, max= 192, avg=189.89, stdev= 5.87, samples=19 00:36:37.841 lat (usec) : 750=8.02%, 1000=41.14% 00:36:37.841 lat (msec) : 2=0.63%, 50=50.21% 00:36:37.841 cpu : usr=95.42%, sys=4.34%, ctx=17, majf=0, minf=164 00:36:37.841 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:37.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.841 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:37.841 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:37.841 filename1: (groupid=0, jobs=1): err= 0: pid=3963316: Tue Nov 26 20:14:38 2024 00:36:37.841 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10008msec) 00:36:37.841 slat (nsec): min=5534, max=30525, avg=6687.80, stdev=1816.87 00:36:37.841 clat (usec): min=40806, max=42211, avg=40994.95, stdev=127.50 00:36:37.841 lat (usec): min=40812, max=42241, avg=41001.64, stdev=127.78 00:36:37.841 clat percentiles (usec): 00:36:37.841 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:36:37.841 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:37.841 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:37.841 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:37.841 | 99.99th=[42206] 00:36:37.841 bw ( KiB/s): min= 384, max= 416, per=33.80%, avg=388.80, stdev=11.72, samples=20 00:36:37.841 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:36:37.841 lat (msec) : 50=100.00% 00:36:37.841 cpu : usr=96.00%, sys=3.77%, ctx=13, majf=0, minf=120 00:36:37.841 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:37.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:36:37.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.841 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:37.841 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:37.841 00:36:37.841 Run status group 0 (all jobs): 00:36:37.841 READ: bw=1148KiB/s (1175kB/s), 390KiB/s-758KiB/s (399kB/s-777kB/s), io=11.2MiB (11.8MB), run=10001-10008msec 00:36:38.102 20:14:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:38.102 20:14:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:36:38.102 20:14:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:38.102 20:14:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:38.102 20:14:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:38.102 20:14:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:38.102 20:14:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.102 20:14:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:38.102 20:14:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.102 20:14:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:38.102 20:14:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.102 20:14:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:38.102 20:14:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.102 20:14:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:38.102 20:14:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:38.102 20:14:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:38.102 20:14:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:38.102 20:14:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.102 20:14:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:38.103 20:14:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.103 20:14:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:38.103 20:14:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.103 20:14:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:38.103 20:14:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.103 00:36:38.103 real 0m11.552s 00:36:38.103 user 0m34.661s 00:36:38.103 sys 0m1.168s 00:36:38.103 20:14:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:38.103 20:14:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:38.103 ************************************ 00:36:38.103 END TEST fio_dif_1_multi_subsystems 00:36:38.103 ************************************ 00:36:38.103 20:14:38 nvmf_dif -- target/dif.sh@143 -- # 
run_test fio_dif_rand_params fio_dif_rand_params 00:36:38.103 20:14:38 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:38.103 20:14:38 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:38.103 20:14:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:38.103 ************************************ 00:36:38.103 START TEST fio_dif_rand_params 00:36:38.103 ************************************ 00:36:38.103 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:36:38.103 20:14:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:38.103 20:14:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:38.103 20:14:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:38.103 20:14:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:38.103 20:14:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:36:38.103 20:14:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:38.103 20:14:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:38.103 20:14:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:38.103 20:14:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:38.103 20:14:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:38.103 20:14:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:38.103 20:14:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:38.103 20:14:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:38.103 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.103 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:38.103 bdev_null0 00:36:38.103 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.103 20:14:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:38.103 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.103 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:38.363 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.363 20:14:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:38.363 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.363 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:38.363 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.363 20:14:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:38.363 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.363 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:38.363 [2024-11-26 20:14:38.947091] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
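The fio banner that follows ("rw=randread, bs=128KiB, iodepth=3", three threads) comes from the job file gen_fio_conf builds out of the NULL_DIF=3 / bs=128k / numjobs=3 / iodepth=3 / runtime=5 parameters set at the start of this test. The literal file is never echoed into the log, so this is only a plausible reconstruction under that assumption:

    # Plausible reconstruction only: gen_fio_conf's exact output is not shown in the log;
    # the bdev name Nvme0n1 (controller Nvme0, namespace 1) is SPDK's usual naming convention.
    cat > /tmp/dif_rand.fio <<'EOF'
    [global]
    thread=1
    ioengine=spdk_bdev
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    time_based=1
    runtime=5
    [filename0]
    filename=Nvme0n1
    EOF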
00:36:38.363 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.363 20:14:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:38.363 20:14:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:38.363 20:14:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:38.363 20:14:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:38.363 20:14:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:38.363 20:14:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:38.363 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:38.363 20:14:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:38.363 20:14:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:38.363 20:14:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:38.363 { 00:36:38.363 "params": { 00:36:38.363 "name": "Nvme$subsystem", 00:36:38.363 "trtype": "$TEST_TRANSPORT", 00:36:38.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:38.363 "adrfam": "ipv4", 00:36:38.363 "trsvcid": "$NVMF_PORT", 00:36:38.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:38.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:38.363 "hdgst": ${hdgst:-false}, 00:36:38.363 "ddgst": ${ddgst:-false} 00:36:38.363 }, 00:36:38.363 "method": "bdev_nvme_attach_controller" 00:36:38.363 } 00:36:38.363 EOF 00:36:38.363 )") 00:36:38.363 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:38.363 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:38.363 20:14:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:38.364 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:38.364 20:14:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:38.364 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:38.364 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:38.364 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:38.364 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:38.364 20:14:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:38.364 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:38.364 20:14:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:38.364 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:38.364 20:14:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:38.364 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:38.364 20:14:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # 
jq . 00:36:38.364 20:14:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:38.364 20:14:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:38.364 "params": { 00:36:38.364 "name": "Nvme0", 00:36:38.364 "trtype": "tcp", 00:36:38.364 "traddr": "10.0.0.2", 00:36:38.364 "adrfam": "ipv4", 00:36:38.364 "trsvcid": "4420", 00:36:38.364 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:38.364 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:38.364 "hdgst": false, 00:36:38.364 "ddgst": false 00:36:38.364 }, 00:36:38.364 "method": "bdev_nvme_attach_controller" 00:36:38.364 }' 00:36:38.364 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:38.364 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:38.364 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:38.364 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:38.364 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:38.364 20:14:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:38.364 20:14:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:38.364 20:14:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:38.364 20:14:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:38.364 20:14:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:38.623 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:38.623 ... 
00:36:38.623 fio-3.35 00:36:38.623 Starting 3 threads 00:36:45.209 00:36:45.209 filename0: (groupid=0, jobs=1): err= 0: pid=3965771: Tue Nov 26 20:14:44 2024 00:36:45.209 read: IOPS=277, BW=34.7MiB/s (36.4MB/s)(175MiB/5046msec) 00:36:45.209 slat (nsec): min=5570, max=31737, avg=8911.54, stdev=2120.21 00:36:45.209 clat (usec): min=4233, max=91409, avg=10770.54, stdev=8696.49 00:36:45.209 lat (usec): min=4242, max=91415, avg=10779.45, stdev=8696.26 00:36:45.209 clat percentiles (usec): 00:36:45.209 | 1.00th=[ 4686], 5.00th=[ 6194], 10.00th=[ 6849], 20.00th=[ 7767], 00:36:45.209 | 30.00th=[ 8225], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[ 9765], 00:36:45.209 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11469], 95.00th=[12387], 00:36:45.209 | 99.00th=[49021], 99.50th=[50070], 99.90th=[88605], 99.95th=[91751], 00:36:45.209 | 99.99th=[91751] 00:36:45.209 bw ( KiB/s): min=25344, max=42496, per=31.63%, avg=35788.80, stdev=5215.54, samples=10 00:36:45.209 iops : min= 198, max= 332, avg=279.60, stdev=40.75, samples=10 00:36:45.209 lat (msec) : 10=65.29%, 20=30.57%, 50=3.57%, 100=0.57% 00:36:45.209 cpu : usr=92.37%, sys=6.12%, ctx=408, majf=0, minf=105 00:36:45.209 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:45.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.209 issued rwts: total=1400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.209 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:45.209 filename0: (groupid=0, jobs=1): err= 0: pid=3965772: Tue Nov 26 20:14:44 2024 00:36:45.209 read: IOPS=339, BW=42.5MiB/s (44.5MB/s)(213MiB/5005msec) 00:36:45.209 slat (nsec): min=5562, max=33363, avg=8663.78, stdev=1901.18 00:36:45.209 clat (usec): min=4098, max=86325, avg=8819.65, stdev=7492.02 00:36:45.209 lat (usec): min=4106, max=86334, avg=8828.32, stdev=7491.95 00:36:45.209 clat percentiles (usec): 00:36:45.209 | 1.00th=[ 4752], 5.00th=[ 5276], 10.00th=[ 5604], 20.00th=[ 6259], 00:36:45.209 | 30.00th=[ 6783], 40.00th=[ 7242], 50.00th=[ 7635], 60.00th=[ 7898], 00:36:45.209 | 70.00th=[ 8225], 80.00th=[ 8586], 90.00th=[ 9241], 95.00th=[ 9896], 00:36:45.209 | 99.00th=[46924], 99.50th=[47973], 99.90th=[50070], 99.95th=[86508], 00:36:45.209 | 99.99th=[86508] 00:36:45.209 bw ( KiB/s): min=30464, max=51456, per=38.41%, avg=43468.80, stdev=6135.46, samples=10 00:36:45.209 iops : min= 238, max= 402, avg=339.60, stdev=47.93, samples=10 00:36:45.209 lat (msec) : 10=95.24%, 20=1.29%, 50=3.24%, 100=0.24% 00:36:45.209 cpu : usr=92.05%, sys=6.71%, ctx=243, majf=0, minf=104 00:36:45.209 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:45.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.209 issued rwts: total=1700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.209 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:45.209 filename0: (groupid=0, jobs=1): err= 0: pid=3965773: Tue Nov 26 20:14:44 2024 00:36:45.209 read: IOPS=269, BW=33.7MiB/s (35.4MB/s)(170MiB/5044msec) 00:36:45.209 slat (nsec): min=5559, max=32324, avg=8031.17, stdev=1641.44 00:36:45.209 clat (usec): min=4258, max=88919, avg=11077.31, stdev=9302.43 00:36:45.209 lat (usec): min=4267, max=88928, avg=11085.34, stdev=9302.41 00:36:45.209 clat percentiles (usec): 00:36:45.209 | 1.00th=[ 4948], 5.00th=[ 5997], 10.00th=[ 6587], 20.00th=[ 
7570], 00:36:45.209 | 30.00th=[ 8094], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9634], 00:36:45.209 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11600], 95.00th=[45351], 00:36:45.209 | 99.00th=[50070], 99.50th=[50594], 99.90th=[55837], 99.95th=[88605], 00:36:45.209 | 99.99th=[88605] 00:36:45.209 bw ( KiB/s): min=21504, max=43776, per=30.74%, avg=34790.40, stdev=6395.39, samples=10 00:36:45.209 iops : min= 168, max= 342, avg=271.80, stdev=49.96, samples=10 00:36:45.209 lat (msec) : 10=67.01%, 20=27.63%, 50=4.56%, 100=0.81% 00:36:45.209 cpu : usr=94.98%, sys=4.78%, ctx=9, majf=0, minf=64 00:36:45.209 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:45.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:45.209 issued rwts: total=1361,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:45.209 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:45.209 00:36:45.209 Run status group 0 (all jobs): 00:36:45.209 READ: bw=111MiB/s (116MB/s), 33.7MiB/s-42.5MiB/s (35.4MB/s-44.5MB/s), io=558MiB (585MB), run=5005-5046msec 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.209 bdev_null0 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.209 [2024-11-26 20:14:45.159906] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.209 bdev_null1 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.209 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.210 bdev_null2 00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:45.210 {
00:36:45.210 "params": {
00:36:45.210 "name": "Nvme$subsystem",
00:36:45.210 "trtype": "$TEST_TRANSPORT",
00:36:45.210 "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:45.210 "adrfam": "ipv4",
00:36:45.210 "trsvcid": "$NVMF_PORT",
00:36:45.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:45.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:45.210 "hdgst": ${hdgst:-false},
00:36:45.210 "ddgst": ${ddgst:-false}
00:36:45.210 },
00:36:45.210 "method": "bdev_nvme_attach_controller"
00:36:45.210 }
00:36:45.210 EOF
00:36:45.210 )")
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:45.210 {
00:36:45.210 "params": {
00:36:45.210 "name": "Nvme$subsystem",
00:36:45.210 "trtype": "$TEST_TRANSPORT",
00:36:45.210 "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:45.210 "adrfam": "ipv4",
00:36:45.210 "trsvcid": "$NVMF_PORT",
00:36:45.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:45.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:45.210 "hdgst": ${hdgst:-false},
00:36:45.210 "ddgst": ${ddgst:-false}
00:36:45.210 },
00:36:45.210 "method": "bdev_nvme_attach_controller"
00:36:45.210 }
00:36:45.210 EOF
00:36:45.210 )")
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:45.210 {
00:36:45.210 "params": {
00:36:45.210 "name": "Nvme$subsystem",
00:36:45.210 "trtype": "$TEST_TRANSPORT",
00:36:45.210 "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:45.210 "adrfam": "ipv4",
00:36:45.210 "trsvcid": "$NVMF_PORT",
00:36:45.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:45.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:45.210 "hdgst": ${hdgst:-false},
00:36:45.210 "ddgst": ${ddgst:-false}
00:36:45.210 },
00:36:45.210 "method": "bdev_nvme_attach_controller"
00:36:45.210 }
00:36:45.210 EOF
00:36:45.210 )")
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:36:45.210 "params": {
00:36:45.210 "name": "Nvme0",
00:36:45.210 "trtype": "tcp",
00:36:45.210 "traddr": "10.0.0.2",
00:36:45.210 "adrfam": "ipv4",
00:36:45.210 "trsvcid": "4420",
00:36:45.210 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:36:45.210 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:36:45.210 "hdgst": false,
00:36:45.210 "ddgst": false
00:36:45.210 },
00:36:45.210 "method": "bdev_nvme_attach_controller"
00:36:45.210 },{
00:36:45.210 "params": {
00:36:45.210 "name": "Nvme1",
00:36:45.210 "trtype": "tcp",
00:36:45.210 "traddr": "10.0.0.2",
00:36:45.210 "adrfam": "ipv4",
00:36:45.210 "trsvcid": "4420",
00:36:45.210 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:36:45.210 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:36:45.210 "hdgst": false,
00:36:45.210 "ddgst": false
00:36:45.210 },
00:36:45.210 "method": "bdev_nvme_attach_controller"
00:36:45.210 },{
00:36:45.210 "params": {
00:36:45.210 "name": "Nvme2",
00:36:45.210 "trtype": "tcp",
00:36:45.210 "traddr": "10.0.0.2",
00:36:45.210 "adrfam": "ipv4",
00:36:45.210 "trsvcid": "4420",
00:36:45.210 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:36:45.210 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:36:45.210 "hdgst": false,
00:36:45.210 "ddgst": false
00:36:45.210 },
00:36:45.210 "method": "bdev_nvme_attach_controller"
00:36:45.210 }'
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:36:45.210 20:14:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:45.210 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:36:45.210 ...
00:36:45.210 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:36:45.210 ...
00:36:45.210 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:36:45.210 ...
00:36:45.210 fio-3.35
00:36:45.210 Starting 24 threads
00:36:57.448
00:36:57.448 filename0: (groupid=0, jobs=1): err= 0: pid=3967081: Tue Nov 26 20:14:56 2024
00:36:57.448 read: IOPS=658, BW=2632KiB/s (2696kB/s)(25.8MiB/10017msec)
00:36:57.448 slat (nsec): min=5741, max=86578, avg=13985.16, stdev=10715.54
00:36:57.448 clat (usec): min=8087, max=28114, avg=24199.43, stdev=2066.20
00:36:57.448 lat (usec): min=8097, max=28123, avg=24213.41, stdev=2065.36
00:36:57.448 clat percentiles (usec):
00:36:57.448 | 1.00th=[12518], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987],
00:36:57.448 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24511],
00:36:57.448 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25560],
00:36:57.448 | 99.00th=[26084], 99.50th=[26084], 99.90th=[26608], 99.95th=[26608],
00:36:57.448 | 99.99th=[28181]
00:36:57.448 bw ( KiB/s): min= 2432, max= 2944, per=4.15%, avg=2629.20, stdev=106.54, samples=20
00:36:57.448 iops : min= 608, max= 736, avg=657.20, stdev=26.71, samples=20
00:36:57.448 lat (msec) : 10=0.70%, 20=2.21%, 50=97.09%
00:36:57.448 cpu : usr=98.77%, sys=0.92%, ctx=46, majf=0, minf=77
00:36:57.448 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0%
00:36:57.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.448 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.448 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:57.448 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:57.448 filename0: (groupid=0, jobs=1): err= 0: pid=3967082: Tue Nov 26 20:14:56 2024
00:36:57.448 read: IOPS=691, BW=2765KiB/s (2831kB/s)(27.0MiB/10008msec)
00:36:57.448 slat (nsec): min=5699, max=86169, avg=16751.99, stdev=13412.37
00:36:57.448 clat (usec): min=10962, max=38951, avg=23009.25, stdev=3832.47
00:36:57.448 lat (usec): min=10968, max=38957, avg=23026.00, stdev=3835.52
00:36:57.448 clat percentiles (usec):
00:36:57.448 | 1.00th=[14091], 5.00th=[15664], 10.00th=[16909], 20.00th=[19792],
00:36:57.448 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249],
00:36:57.448 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25297], 95.00th=[27395],
00:36:57.448 | 99.00th=[34866], 99.50th=[35914], 99.90th=[38011], 99.95th=[38011],
00:36:57.448 | 99.99th=[39060]
00:36:57.448 bw ( KiB/s): min= 2528, max= 3120, per=4.34%, avg=2753.05, stdev=189.58, samples=19
00:36:57.448 iops : min= 632, max= 780, avg=688.21, stdev=47.36, samples=19
00:36:57.448 lat (msec) : 20=21.05%, 50=78.95%
00:36:57.448 cpu : usr=98.80%, sys=0.83%, ctx=80, majf=0, minf=56
00:36:57.448 IO depths : 1=1.3%, 2=5.1%, 4=18.1%, 8=64.1%, 16=11.5%, 32=0.0%, >=64=0.0%
00:36:57.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.448 complete : 0=0.0%, 4=92.4%, 8=2.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.448 issued rwts: total=6918,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:57.448 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:57.448 filename0: (groupid=0, jobs=1): err= 0: pid=3967084: Tue Nov 26 20:14:56 2024
00:36:57.448 read: IOPS=649, BW=2600KiB/s (2662kB/s)(25.4MiB/10005msec)
00:36:57.448 slat (nsec): min=5719, max=57402, avg=13357.73, stdev=8672.91
00:36:57.448 clat (usec): min=6508, max=44568, avg=24504.10, stdev=1524.30
00:36:57.448 lat (usec): min=6514, max=44586, avg=24517.45, stdev=1524.56
00:36:57.448 clat percentiles (usec):
00:36:57.448 | 1.00th=[23462], 5.00th=[23725], 10.00th=[23987], 20.00th=[23987],
00:36:57.448 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24511],
00:36:57.448 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25560],
00:36:57.448 | 99.00th=[26084], 99.50th=[26346], 99.90th=[44303], 99.95th=[44303],
00:36:57.448 | 99.99th=[44827]
00:36:57.448 bw ( KiB/s): min= 2432, max= 2688, per=4.09%, avg=2592.42, stdev=72.60, samples=19
00:36:57.448 iops : min= 608, max= 672, avg=648.00, stdev=18.21, samples=19
00:36:57.448 lat (msec) : 10=0.22%, 20=0.45%, 50=99.34%
00:36:57.448 cpu : usr=98.65%, sys=0.93%, ctx=127, majf=0, minf=40
00:36:57.448 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0%
00:36:57.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.448 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.448 issued rwts: total=6503,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:57.448 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:57.448 filename0: (groupid=0, jobs=1): err= 0: pid=3967085: Tue Nov 26 20:14:56 2024
00:36:57.448 read: IOPS=649, BW=2598KiB/s (2660kB/s)(25.4MiB/10001msec)
00:36:57.448 slat (nsec): min=5717, max=63022, avg=15581.45, stdev=9634.77
00:36:57.448 clat (usec): min=13499, max=38483, avg=24492.35, stdev=1086.21
00:36:57.448 lat (usec): min=13505, max=38499, avg=24507.93, stdev=1086.62
00:36:57.448 clat percentiles (usec):
00:36:57.448 | 1.00th=[23200], 5.00th=[23725], 10.00th=[23987], 20.00th=[23987],
00:36:57.448 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24511],
00:36:57.448 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25560],
00:36:57.448 | 99.00th=[26084], 99.50th=[26346], 99.90th=[38536], 99.95th=[38536],
00:36:57.448 | 99.99th=[38536]
00:36:57.448 bw ( KiB/s): min= 2432, max= 2688, per=4.09%, avg=2593.05, stdev=72.27, samples=19
00:36:57.448 iops : min= 608, max= 672, avg=648.21, stdev=18.10, samples=19
00:36:57.448 lat (msec) : 20=0.49%, 50=99.51%
00:36:57.448 cpu : usr=97.69%, sys=1.43%, ctx=850, majf=0, minf=47
00:36:57.448 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:36:57.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.448 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.448 issued rwts: total=6496,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:57.448 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:57.448 filename0: (groupid=0, jobs=1): err= 0: pid=3967086: Tue Nov 26 20:14:56 2024
00:36:57.448 read: IOPS=650, BW=2603KiB/s (2666kB/s)(25.4MiB/10005msec)
00:36:57.448 slat (nsec): min=5784, max=53083, avg=13788.04, stdev=7744.76
00:36:57.448 clat (usec): min=7838, max=38938, avg=24461.12, stdev=1408.81
00:36:57.448 lat (usec): min=7844, max=38955, avg=24474.91, stdev=1408.57
00:36:57.448 clat percentiles (usec):
00:36:57.448 | 1.00th=[23200], 5.00th=[23725], 10.00th=[23987], 20.00th=[23987],
00:36:57.448 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24511],
00:36:57.448 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25560],
00:36:57.448 | 99.00th=[26084], 99.50th=[26084], 99.90th=[39060], 99.95th=[39060],
00:36:57.448 | 99.99th=[39060]
00:36:57.448 bw ( KiB/s): min= 2432, max= 2688, per=4.09%, avg=2592.42, stdev=72.00, samples=19
00:36:57.448 iops : min= 608, max= 672, avg=648.00, stdev=18.01, samples=19
00:36:57.448 lat (msec) : 10=0.25%, 20=0.49%, 50=99.26%
00:36:57.448 cpu : usr=98.73%, sys=0.87%, ctx=88, majf=0, minf=63
00:36:57.448 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:36:57.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.448 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.448 issued rwts: total=6512,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:57.448 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:57.448 filename0: (groupid=0, jobs=1): err= 0: pid=3967087: Tue Nov 26 20:14:56 2024
00:36:57.448 read: IOPS=748, BW=2994KiB/s (3066kB/s)(29.3MiB/10012msec)
00:36:57.448 slat (nsec): min=5696, max=71564, avg=11885.70, stdev=9718.68
00:36:57.448 clat (usec): min=8968, max=44646, avg=21299.69, stdev=5156.14
00:36:57.448 lat (usec): min=8977, max=44668, avg=21311.57, stdev=5158.98
00:36:57.448 clat percentiles (usec):
00:36:57.448 | 1.00th=[12387], 5.00th=[14615], 10.00th=[15664], 20.00th=[16581],
00:36:57.448 | 30.00th=[17695], 40.00th=[19006], 50.00th=[20579], 60.00th=[23987],
00:36:57.448 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25822], 95.00th=[31327],
00:36:57.448 | 99.00th=[38011], 99.50th=[39584], 99.90th=[41681], 99.95th=[44827],
00:36:57.449 | 99.99th=[44827]
00:36:57.449 bw ( KiB/s): min= 2560, max= 3576, per=4.75%, avg=3012.84, stdev=269.90, samples=19
00:36:57.449 iops : min= 640, max= 894, avg=753.16, stdev=67.47, samples=19
00:36:57.449 lat (msec) : 10=0.41%, 20=45.50%, 50=54.09%
00:36:57.449 cpu : usr=98.36%, sys=1.19%, ctx=158, majf=0, minf=66
00:36:57.449 IO depths : 1=1.1%, 2=2.3%, 4=9.7%, 8=75.0%, 16=11.9%, 32=0.0%, >=64=0.0%
00:36:57.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.449 complete : 0=0.0%, 4=89.9%, 8=5.1%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.449 issued rwts: total=7495,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:57.449 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:57.449 filename0: (groupid=0, jobs=1): err= 0: pid=3967088: Tue Nov 26 20:14:56 2024
00:36:57.449 read: IOPS=649, BW=2600KiB/s (2662kB/s)(25.4MiB/10005msec)
00:36:57.449 slat (nsec): min=5752, max=62287, avg=20227.39, stdev=10826.85
00:36:57.449 clat (usec): min=8347, max=44722, avg=24437.35, stdev=1603.56
00:36:57.449 lat (usec): min=8355, max=44747, avg=24457.58, stdev=1604.02
00:36:57.449 clat percentiles (usec):
00:36:57.449 | 1.00th=[19792], 5.00th=[23725], 10.00th=[23725], 20.00th=[23987],
00:36:57.449 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24511], 60.00th=[24511],
00:36:57.449 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25297],
00:36:57.449 | 99.00th=[26346], 99.50th=[28181], 99.90th=[44827], 99.95th=[44827],
00:36:57.449 | 99.99th=[44827]
00:36:57.449 bw ( KiB/s): min= 2416, max= 2688, per=4.08%, avg=2588.21, stdev=70.58, samples=19
00:36:57.449 iops : min= 604, max= 672, avg=646.95, stdev=17.68, samples=19
00:36:57.449 lat (msec) : 10=0.09%, 20=0.95%, 50=98.95%
00:36:57.449 cpu : usr=98.31%, sys=1.14%, ctx=190, majf=0, minf=48
00:36:57.449 IO depths : 1=5.8%, 2=11.9%, 4=24.8%, 8=50.8%, 16=6.8%, 32=0.0%, >=64=0.0%
00:36:57.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.449 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.449 issued rwts: total=6502,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:57.449 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:57.449 filename0: (groupid=0, jobs=1): err= 0: pid=3967089: Tue Nov 26 20:14:56 2024
00:36:57.449 read: IOPS=670, BW=2684KiB/s (2748kB/s)(26.2MiB/10016msec)
00:36:57.449 slat (nsec): min=5736, max=73243, avg=10812.69, stdev=7059.28
00:36:57.449 clat (usec): min=7952, max=28596, avg=23760.43, stdev=2511.84
00:36:57.449 lat (usec): min=7977, max=28605, avg=23771.24, stdev=2511.56
00:36:57.449 clat percentiles (usec):
00:36:57.449 | 1.00th=[14746], 5.00th=[16581], 10.00th=[23462], 20.00th=[23987],
00:36:57.449 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24511],
00:36:57.449 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25560],
00:36:57.449 | 99.00th=[25822], 99.50th=[26084], 99.90th=[27919], 99.95th=[28443],
00:36:57.449 | 99.99th=[28705]
00:36:57.449 bw ( KiB/s): min= 2554, max= 3504, per=4.23%, avg=2680.40, stdev=216.48, samples=20
00:36:57.449 iops : min= 638, max= 876, avg=670.00, stdev=54.14, samples=20
00:36:57.449 lat (msec) : 10=0.13%, 20=9.02%, 50=90.85%
00:36:57.449 cpu : usr=98.84%, sys=0.77%, ctx=115, majf=0, minf=137
00:36:57.449 IO depths : 1=5.7%, 2=11.5%, 4=23.5%, 8=52.5%, 16=6.8%, 32=0.0%, >=64=0.0%
00:36:57.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.449 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.449 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:57.449 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:57.449 filename1: (groupid=0, jobs=1): err= 0: pid=3967090: Tue Nov 26 20:14:56 2024
00:36:57.449 read: IOPS=650, BW=2602KiB/s (2665kB/s)(25.4MiB/10009msec)
00:36:57.449 slat (nsec): min=5452, max=61743, avg=17001.64, stdev=10533.71
00:36:57.449 clat (usec): min=13511, max=31405, avg=24439.15, stdev=869.18
00:36:57.449 lat (usec): min=13531, max=31422, avg=24456.15, stdev=869.82
00:36:57.449 clat percentiles (usec):
00:36:57.449 | 1.00th=[23200], 5.00th=[23725], 10.00th=[23725], 20.00th=[23987],
00:36:57.449 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24511],
00:36:57.449 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25560],
00:36:57.449 | 99.00th=[26084], 99.50th=[26084], 99.90th=[26608], 99.95th=[26608],
00:36:57.449 | 99.99th=[31327]
00:36:57.449 bw ( KiB/s): min= 2554, max= 2688, per=4.10%, avg=2599.79, stdev=61.60, samples=19
00:36:57.449 iops : min= 638, max= 672, avg=649.89, stdev=15.44, samples=19
00:36:57.449 lat (msec) : 20=0.52%, 50=99.48%
00:36:57.449 cpu : usr=99.03%, sys=0.70%, ctx=15, majf=0, minf=41
00:36:57.449 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:36:57.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.449 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.449 issued rwts: total=6512,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:57.449 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:57.449 filename1: (groupid=0, jobs=1): err= 0: pid=3967091: Tue Nov 26 20:14:56 2024
00:36:57.449 read: IOPS=653, BW=2614KiB/s (2677kB/s)(25.6MiB/10013msec)
00:36:57.449 slat (nsec): min=5731, max=69573, avg=16238.09, stdev=11702.26
00:36:57.449 clat (usec): min=8652, max=26700, avg=24343.51, stdev=1391.69
00:36:57.449 lat (usec): min=8663, max=26707, avg=24359.75, stdev=1391.93
00:36:57.449 clat percentiles (usec):
00:36:57.449 | 1.00th=[16319], 5.00th=[23725], 10.00th=[23725], 20.00th=[23987],
00:36:57.449 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24511],
00:36:57.449 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25560],
00:36:57.449 | 99.00th=[26084], 99.50th=[26084], 99.90th=[26608], 99.95th=[26608],
00:36:57.449 | 99.99th=[26608]
00:36:57.449 bw ( KiB/s): min= 2432, max= 2816, per=4.12%, avg=2612.63, stdev=88.52, samples=19
00:36:57.449 iops : min= 608, max= 704, avg=653.05, stdev=22.12, samples=19
00:36:57.449 lat (msec) : 10=0.21%, 20=1.25%, 50=98.53%
00:36:57.449 cpu : usr=98.98%, sys=0.74%, ctx=15, majf=0, minf=55
00:36:57.449 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:36:57.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.449 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.449 issued rwts: total=6544,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:57.449 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:57.449 filename1: (groupid=0, jobs=1): err= 0: pid=3967092: Tue Nov 26 20:14:56 2024
00:36:57.449 read: IOPS=650, BW=2604KiB/s (2666kB/s)(25.4MiB/10004msec)
00:36:57.449 slat (nsec): min=5726, max=90028, avg=17707.38, stdev=14259.31
00:36:57.449 clat (usec): min=7979, max=38295, avg=24414.33, stdev=1431.42
00:36:57.449 lat (usec): min=7985, max=38314, avg=24432.03, stdev=1430.47
00:36:57.449 clat percentiles (usec):
00:36:57.449 | 1.00th=[22938], 5.00th=[23725], 10.00th=[23725], 20.00th=[23987],
00:36:57.449 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24511], 60.00th=[24511],
00:36:57.449 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25560],
00:36:57.449 | 99.00th=[26084], 99.50th=[26608], 99.90th=[38011], 99.95th=[38011],
00:36:57.449 | 99.99th=[38536]
00:36:57.449 bw ( KiB/s): min= 2432, max= 2688, per=4.09%, avg=2592.42, stdev=72.00, samples=19
00:36:57.449 iops : min= 608, max= 672, avg=648.00, stdev=18.01, samples=19
00:36:57.449 lat (msec) : 10=0.25%, 20=0.55%, 50=99.20%
00:36:57.449 cpu : usr=98.87%, sys=0.69%, ctx=68, majf=0, minf=43
00:36:57.449 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:36:57.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.449 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.449 issued rwts: total=6512,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:57.449 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:57.449 filename1: (groupid=0, jobs=1): err= 0: pid=3967093: Tue Nov 26 20:14:56 2024
00:36:57.449 read: IOPS=676, BW=2705KiB/s (2770kB/s)(26.4MiB/10011msec)
00:36:57.449 slat (nsec): min=5704, max=85537, avg=16341.17, stdev=12516.50
00:36:57.449 clat (usec): min=8413, max=41274, avg=23544.65, stdev=4379.38
00:36:57.449 lat (usec): min=8489, max=41280, avg=23561.00, stdev=4380.69
00:36:57.449 clat percentiles (usec):
00:36:57.449 | 1.00th=[13173], 5.00th=[15533], 10.00th=[17171], 20.00th=[20317],
00:36:57.449 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511],
00:36:57.449 | 70.00th=[24773], 80.00th=[25035], 90.00th=[26346], 95.00th=[31065],
00:36:57.449 | 99.00th=[38536], 99.50th=[39584], 99.90th=[41157], 99.95th=[41157],
00:36:57.449 | 99.99th=[41157]
00:36:57.449 bw ( KiB/s): min= 2560, max= 3017, per=4.27%, avg=2708.37, stdev=129.13, samples=19
00:36:57.449 iops : min= 640, max= 754, avg=677.05, stdev=32.24, samples=19
00:36:57.449 lat (msec) : 10=0.12%, 20=18.58%, 50=81.30%
00:36:57.449 cpu : usr=98.67%, sys=0.88%, ctx=100, majf=0, minf=74
00:36:57.449 IO depths : 1=1.4%, 2=3.4%, 4=13.0%, 8=69.9%, 16=12.4%, 32=0.0%, >=64=0.0%
00:36:57.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.449 complete : 0=0.0%, 4=91.5%, 8=4.1%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.449 issued rwts: total=6770,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:57.449 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:57.449 filename1: (groupid=0, jobs=1): err= 0: pid=3967094: Tue Nov 26 20:14:56 2024
00:36:57.449 read: IOPS=662, BW=2651KiB/s (2715kB/s)(25.9MiB/10012msec)
00:36:57.449 slat (nsec): min=5706, max=89285, avg=20419.72, stdev=14171.70
00:36:57.449 clat (usec): min=8237, max=42845, avg=23948.60, stdev=2549.24
00:36:57.449 lat (usec): min=8261, max=42860, avg=23969.02, stdev=2550.46
00:36:57.449 clat percentiles (usec):
00:36:57.449 | 1.00th=[12649], 5.00th=[19006], 10.00th=[23725], 20.00th=[23987],
00:36:57.449 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24511],
00:36:57.449 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25297], 95.00th=[25560],
00:36:57.449 | 99.00th=[26084], 99.50th=[30278], 99.90th=[42730], 99.95th=[42730],
00:36:57.449 | 99.99th=[42730]
00:36:57.449 bw ( KiB/s): min= 2432, max= 3088, per=4.18%, avg=2651.37, stdev=149.87, samples=19
00:36:57.449 iops : min= 608, max= 772, avg=662.74, stdev=37.52, samples=19
00:36:57.449 lat (msec) : 10=0.63%, 20=4.97%, 50=94.39%
00:36:57.449 cpu : usr=99.07%, sys=0.64%, ctx=29, majf=0, minf=49
00:36:57.449 IO depths : 1=5.5%, 2=11.5%, 4=23.9%, 8=52.1%, 16=7.0%, 32=0.0%, >=64=0.0%
00:36:57.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.449 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.449 issued rwts: total=6636,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:57.449 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:57.449 filename1: (groupid=0, jobs=1): err= 0: pid=3967096: Tue Nov 26 20:14:56 2024
00:36:57.450 read: IOPS=662, BW=2651KiB/s (2714kB/s)(25.9MiB/10011msec)
00:36:57.450 slat (nsec): min=5707, max=84731, avg=16659.26, stdev=12423.30
00:36:57.450 clat (usec): min=8110, max=39559, avg=23979.66, stdev=2728.49
00:36:57.450 lat (usec): min=8119, max=39576, avg=23996.31, stdev=2728.45
00:36:57.450 clat percentiles (usec):
00:36:57.450 | 1.00th=[11863], 5.00th=[17433], 10.00th=[23725], 20.00th=[23987],
00:36:57.450 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24511],
00:36:57.450 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25560],
00:36:57.450 | 99.00th=[28967], 99.50th=[34866], 99.90th=[39584], 99.95th=[39584],
00:36:57.450 | 99.99th=[39584]
00:36:57.450 bw ( KiB/s): min= 2554, max= 3120, per=4.18%, avg=2652.21, stdev=144.10, samples=19
00:36:57.450 iops : min= 638, max= 780, avg=662.95, stdev=36.01, samples=19
00:36:57.450 lat (msec) : 10=0.84%, 20=5.34%, 50=93.82%
00:36:57.450 cpu : usr=99.08%, sys=0.64%, ctx=22, majf=0, minf=41
00:36:57.450 IO depths : 1=5.6%, 2=11.2%, 4=23.4%, 8=52.8%, 16=7.0%, 32=0.0%, >=64=0.0%
00:36:57.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.450 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.450 issued rwts: total=6634,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:57.450 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:57.450 filename1: (groupid=0, jobs=1): err= 0: pid=3967097: Tue Nov 26 20:14:56 2024
00:36:57.450 read: IOPS=655, BW=2623KiB/s (2686kB/s)(25.6MiB/10006msec)
00:36:57.450 slat (nsec): min=5604, max=79617, avg=15045.33, stdev=12374.96
00:36:57.450 clat (usec): min=6828, max=45683, avg=24338.20, stdev=2883.54
00:36:57.450 lat (usec): min=6833, max=45699, avg=24353.25, stdev=2884.11
00:36:57.450 clat percentiles (usec):
00:36:57.450 | 1.00th=[14484], 5.00th=[19530], 10.00th=[22676], 20.00th=[23987],
00:36:57.450 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24511],
00:36:57.450 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[27657],
00:36:57.450 | 99.00th=[33162], 99.50th=[36963], 99.90th=[45876], 99.95th=[45876],
00:36:57.450 | 99.99th=[45876]
00:36:57.450 bw ( KiB/s): min= 2400, max= 2746, per=4.12%, avg=2612.63, stdev=71.68, samples=19
00:36:57.450 iops : min= 600, max= 686, avg=653.05, stdev=17.88, samples=19
00:36:57.450 lat (msec) : 10=0.30%, 20=5.84%, 50=93.86%
00:36:57.450 cpu : usr=98.52%, sys=1.01%, ctx=103, majf=0, minf=49
00:36:57.450 IO depths : 1=0.1%, 2=0.3%, 4=1.6%, 8=80.6%, 16=17.4%, 32=0.0%, >=64=0.0%
00:36:57.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.450 complete : 0=0.0%, 4=89.4%, 8=9.6%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.450 issued rwts: total=6562,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:57.450 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:57.450 filename1: (groupid=0, jobs=1): err= 0: pid=3967098: Tue Nov 26 20:14:56 2024
00:36:57.450 read: IOPS=660, BW=2640KiB/s (2704kB/s)(25.8MiB/10006msec)
00:36:57.450 slat (nsec): min=5289, max=76728, avg=17092.00, stdev=13137.45
00:36:57.450 clat (usec): min=6903, max=45903, avg=24106.54, stdev=3395.74
00:36:57.450 lat (usec): min=6909, max=45918, avg=24123.64, stdev=3396.62
00:36:57.450 clat percentiles (usec):
00:36:57.450 | 1.00th=[13042], 5.00th=[17433], 10.00th=[20579], 20.00th=[23725],
00:36:57.450 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24511],
00:36:57.450 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[27657],
00:36:57.450 | 99.00th=[36439], 99.50th=[38011], 99.90th=[45876], 99.95th=[45876],
00:36:57.450 | 99.99th=[45876]
00:36:57.450 bw ( KiB/s): min= 2436, max= 2848, per=4.15%, avg=2632.32, stdev=95.20, samples=19
00:36:57.450 iops : min= 609, max= 712, avg=657.95, stdev=23.88, samples=19
00:36:57.450 lat (msec) : 10=0.29%, 20=8.12%, 50=91.60%
00:36:57.450 cpu : usr=98.87%, sys=0.81%, ctx=74, majf=0, minf=46
00:36:57.450 IO depths : 1=2.2%, 2=4.9%, 4=11.9%, 8=68.1%, 16=13.0%, 32=0.0%, >=64=0.0%
00:36:57.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.450 complete : 0=0.0%, 4=91.1%, 8=5.0%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.450 issued rwts: total=6605,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:57.450 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:57.450 filename2: (groupid=0, jobs=1): err= 0: pid=3967099: Tue Nov 26 20:14:56 2024
00:36:57.450 read: IOPS=649, BW=2600KiB/s (2662kB/s)(25.4MiB/10007msec)
00:36:57.450 slat (nsec): min=5711, max=74886, avg=20700.95, stdev=13074.54
00:36:57.450 clat (usec): min=4986, max=42287, avg=24416.69, stdev=1961.45
00:36:57.450 lat (usec): min=4993, max=42293, avg=24437.39, stdev=1961.76
00:36:57.450 clat percentiles (usec):
00:36:57.450 | 1.00th=[17171], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987],
00:36:57.450 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24511], 60.00th=[24511],
00:36:57.450 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25560],
00:36:57.450 | 99.00th=[33162], 99.50th=[34866], 99.90th=[36963], 99.95th=[37487],
00:36:57.450 | 99.99th=[42206]
00:36:57.450 bw ( KiB/s): min= 2432, max= 2698, per=4.09%, avg=2595.47, stdev=72.20, samples=19
00:36:57.450 iops : min= 608, max= 674, avg=648.74, stdev=18.08, samples=19
00:36:57.450 lat (msec) : 10=0.17%, 20=2.24%, 50=97.59%
00:36:57.450 cpu : usr=98.54%, sys=1.02%, ctx=91, majf=0, minf=34
00:36:57.450 IO depths : 1=5.4%, 2=11.4%, 4=23.9%, 8=52.2%, 16=7.1%, 32=0.0%, >=64=0.0%
00:36:57.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.450 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.450 issued rwts: total=6504,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:57.450 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:57.450 filename2: (groupid=0, jobs=1): err= 0: pid=3967100: Tue Nov 26 20:14:56 2024
00:36:57.450 read: IOPS=656, BW=2626KiB/s (2689kB/s)(25.7MiB/10016msec)
00:36:57.450 slat (nsec): min=5730, max=84837, avg=18188.09, stdev=14059.03
00:36:57.450 clat (usec): min=7265, max=26655, avg=24222.64, stdev=1938.35
00:36:57.450 lat (usec): min=7303, max=26662, avg=24240.83, stdev=1938.18
00:36:57.450 clat percentiles (usec):
00:36:57.450 | 1.00th=[12125], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987],
00:36:57.450 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24511],
00:36:57.450 | 70.00th=[24773], 80.00th=[24773], 90.00th=[25297], 95.00th=[25560],
00:36:57.450 | 99.00th=[26084], 99.50th=[26084], 99.90th=[26608], 99.95th=[26608],
00:36:57.450 | 99.99th=[26608]
00:36:57.450 bw ( KiB/s): min= 2432, max= 2944, per=4.14%, avg=2622.80, stdev=106.67, samples=20
00:36:57.450 iops : min= 608, max= 736, avg=655.60, stdev=26.74, samples=20
00:36:57.450 lat (msec) : 10=0.73%, 20=1.46%, 50=97.81%
00:36:57.450 cpu : usr=98.77%, sys=0.92%, ctx=59, majf=0, minf=58
00:36:57.450 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0%
00:36:57.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.450 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.450 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:57.450 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:57.450 filename2: (groupid=0, jobs=1): err= 0: pid=3967101: Tue Nov 26 20:14:56 2024
00:36:57.450 read: IOPS=658, BW=2633KiB/s (2696kB/s)(25.8MiB/10016msec)
00:36:57.450 slat (nsec): min=5705, max=81235, avg=11236.10, stdev=8262.13
00:36:57.450 clat (usec): min=8204, max=26689, avg=24218.30, stdev=2097.91
00:36:57.450 lat (usec): min=8221, max=26696, avg=24229.54, stdev=2096.16
00:36:57.450 clat percentiles (usec):
00:36:57.450 | 1.00th=[12649], 5.00th=[23462], 10.00th=[23987], 20.00th=[23987],
00:36:57.450 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24511],
00:36:57.450 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25560],
00:36:57.450 | 99.00th=[26084], 99.50th=[26084], 99.90th=[26608], 99.95th=[26608],
00:36:57.450 | 99.99th=[26608]
00:36:57.450 bw ( KiB/s): min= 2554, max= 2944, per=4.15%, avg=2629.20, stdev=98.11, samples=20
00:36:57.450 iops : min= 638, max= 736, avg=657.20, stdev=24.61, samples=20
00:36:57.450 lat (msec) : 10=0.88%, 20=2.03%, 50=97.09%
00:36:57.450 cpu : usr=98.98%, sys=0.71%, ctx=57, majf=0, minf=56
00:36:57.450 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:36:57.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.450 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.450 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:57.450 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:57.450 filename2: (groupid=0, jobs=1): err= 0: pid=3967102: Tue Nov 26 20:14:56 2024
00:36:57.450 read: IOPS=652, BW=2608KiB/s (2671kB/s)(25.5MiB/10009msec)
00:36:57.450 slat (nsec): min=5528, max=66021, avg=12274.07, stdev=9174.03
00:36:57.450 clat (usec): min=8740, max=42345, avg=24485.85, stdev=1670.41
00:36:57.450 lat (usec): min=8746, max=42351, avg=24498.12, stdev=1670.69
00:36:57.450 clat percentiles (usec):
00:36:57.450 | 1.00th=[16581], 5.00th=[23725], 10.00th=[23987], 20.00th=[24249],
00:36:57.450 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773],
00:36:57.450 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25560],
00:36:57.450 | 99.00th=[27132], 99.50th=[31065], 99.90th=[39060], 99.95th=[42206],
00:36:57.450 | 99.99th=[42206]
00:36:57.450 bw ( KiB/s): min= 2512, max= 2688, per=4.09%, avg=2595.89, stdev=46.07, samples=19
00:36:57.450 iops : min= 628, max= 672, avg=648.95, stdev=11.52, samples=19
00:36:57.450 lat (msec) : 10=0.25%, 20=1.56%, 50=98.19%
00:36:57.450 cpu : usr=98.73%, sys=0.99%, ctx=35, majf=0, minf=55
00:36:57.450 IO depths : 1=0.1%, 2=0.1%, 4=0.9%, 8=80.7%, 16=18.3%, 32=0.0%, >=64=0.0%
00:36:57.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.450 complete : 0=0.0%, 4=89.6%, 8=10.0%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.450 issued rwts: total=6526,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:57.450 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:57.450 filename2: (groupid=0, jobs=1): err= 0: pid=3967103: Tue Nov 26 20:14:56 2024
00:36:57.450 read: IOPS=655, BW=2623KiB/s (2686kB/s)(25.6MiB/10002msec)
00:36:57.450 slat (nsec): min=5730, max=70294, avg=15628.14, stdev=10466.10
00:36:57.450 clat (usec): min=7695, max=30146, avg=24267.10, stdev=1918.42
00:36:57.450 lat (usec): min=7705, max=30153, avg=24282.73, stdev=1917.78
00:36:57.450 clat percentiles (usec):
00:36:57.450 | 1.00th=[12256], 5.00th=[23725], 10.00th=[23725], 20.00th=[23987],
00:36:57.450 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24511],
00:36:57.450 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25560],
00:36:57.450 | 99.00th=[26084], 99.50th=[26084], 99.90th=[26608], 99.95th=[26608],
00:36:57.450 | 99.99th=[30016]
00:36:57.450 bw ( KiB/s): min= 2432, max= 2944, per=4.13%, avg=2619.37, stdev=108.45, samples=19
00:36:57.450 iops : min= 608, max= 736, avg=654.74, stdev=27.18, samples=19
00:36:57.450 lat (msec) : 10=0.70%, 20=1.28%, 50=98.02%
00:36:57.451 cpu : usr=98.78%, sys=0.91%, ctx=49, majf=0, minf=41
00:36:57.451 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0%
00:36:57.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.451 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.451 issued rwts: total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:57.451 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:57.451 filename2: (groupid=0, jobs=1): err= 0: pid=3967104: Tue Nov 26 20:14:56 2024
00:36:57.451 read: IOPS=656, BW=2625KiB/s (2688kB/s)(25.6MiB/10006msec)
00:36:57.451 slat (nsec): min=5518, max=68229, avg=18869.93, stdev=11164.96
00:36:57.451 clat (usec): min=8370, max=39747, avg=24211.37, stdev=2298.51
00:36:57.451 lat (usec): min=8376, max=39762, avg=24230.24, stdev=2299.34
00:36:57.451 clat percentiles (usec):
00:36:57.451 | 1.00th=[15008], 5.00th=[19792], 10.00th=[23725], 20.00th=[23987],
00:36:57.451 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24511],
00:36:57.451 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25560],
00:36:57.451 | 99.00th=[31851], 99.50th=[35390], 99.90th=[39584], 99.95th=[39584],
00:36:57.451 | 99.99th=[39584]
00:36:57.451 bw ( KiB/s): min= 2436, max= 2922, per=4.13%, avg=2621.79, stdev=112.64, samples=19
00:36:57.451 iops : min= 609, max= 730, avg=655.32, stdev=28.08, samples=19
00:36:57.451 lat (msec) : 10=0.24%, 20=4.87%, 50=94.88%
00:36:57.451 cpu : usr=98.96%, sys=0.76%, ctx=26, majf=0, minf=35
00:36:57.451 IO depths : 1=5.7%, 2=11.4%, 4=23.3%, 8=52.8%, 16=6.9%, 32=0.0%, >=64=0.0%
00:36:57.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.451 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.451 issued rwts: total=6566,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:57.451 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:57.451 filename2: (groupid=0, jobs=1): err= 0: pid=3967105: Tue Nov 26 20:14:56 2024
00:36:57.451 read: IOPS=651, BW=2605KiB/s (2667kB/s)(25.4MiB/10004msec)
00:36:57.451 slat (nsec): min=5713, max=65797, avg=18424.19, stdev=10196.38
00:36:57.451 clat (usec): min=5885, max=44735, avg=24412.90, stdev=2266.16
00:36:57.451 lat (usec): min=5892, max=44750, avg=24431.32, stdev=2266.57
00:36:57.451 clat percentiles (usec):
00:36:57.451 | 1.00th=[15008], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987],
00:36:57.451 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24511], 60.00th=[24511],
00:36:57.451 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25560],
00:36:57.451 | 99.00th=[32900], 99.50th=[36439], 99.90th=[44827], 99.95th=[44827],
00:36:57.451 | 99.99th=[44827]
00:36:57.451 bw ( KiB/s): min= 2432, max= 2778, per=4.10%, avg=2597.47, stdev=81.40, samples=19
00:36:57.451 iops : min= 608, max= 694, avg=649.26, stdev=20.33, samples=19
00:36:57.451 lat (msec) : 10=0.25%, 20=2.56%, 50=97.19%
00:36:57.451 cpu : usr=98.90%, sys=0.83%, ctx=30, majf=0, minf=32
00:36:57.451 IO depths : 1=5.9%, 2=11.9%, 4=24.1%, 8=51.6%, 16=6.6%, 32=0.0%, >=64=0.0%
00:36:57.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.451 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.451 issued rwts: total=6514,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:57.451 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:57.451 filename2: (groupid=0, jobs=1): err= 0: pid=3967106: Tue Nov 26 20:14:56 2024
00:36:57.451 read: IOPS=682, BW=2731KiB/s (2796kB/s)(26.8MiB/10046msec)
00:36:57.451 slat (nsec): min=5697, max=87429, avg=14415.20, stdev=11921.64
00:36:57.451 clat (usec): min=11839, max=58802, avg=23274.18, stdev=4331.16
00:36:57.451 lat (usec): min=11846, max=58808, avg=23288.59, stdev=4332.74
00:36:57.451 clat percentiles (usec):
00:36:57.451 | 1.00th=[14746], 5.00th=[16057], 10.00th=[17433], 20.00th=[19530],
00:36:57.451 | 30.00th=[21627], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249],
00:36:57.451 | 70.00th=[24511], 80.00th=[25035], 90.00th=[26870], 95.00th=[30016],
00:36:57.451 | 99.00th=[36963], 99.50th=[39060], 99.90th=[44827], 99.95th=[58983],
00:36:57.451 | 99.99th=[58983]
00:36:57.451 bw ( KiB/s): min= 2560, max= 3024, per=4.32%, avg=2739.90, stdev=127.31, samples=20
00:36:57.451 iops : min= 640, max= 756, avg=684.90, stdev=31.85, samples=20
00:36:57.451 lat (msec) : 20=22.57%, 50=77.34%, 100=0.09%
00:36:57.451 cpu : usr=98.80%, sys=0.94%, ctx=41, majf=0, minf=40
00:36:57.451 IO depths : 1=2.3%, 2=4.7%, 4=12.0%, 8=69.7%, 16=11.3%, 32=0.0%, >=64=0.0%
00:36:57.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.451 complete : 0=0.0%, 4=90.7%, 8=4.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:57.451 issued rwts: total=6858,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:57.451 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:57.451
00:36:57.451 Run status group 0 (all jobs):
00:36:57.451 READ: bw=61.9MiB/s (64.9MB/s), 2598KiB/s-2994KiB/s (2660kB/s-3066kB/s), io=622MiB (652MB), run=10001-10046msec
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
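[Editorial aside, not part of the console output: the destroy_subsystems teardown traced around this point drives the SPDK target's JSON-RPC interface through the harness helper rpc_cmd. Outside the harness the same cycle can be reproduced with scripts/rpc.py; a minimal sketch, assuming a target already running on the default RPC socket and the NQN/bdev names used in this run:]

    # Tear down one null-bdev-backed subsystem, as destroy_subsystem() does per id:
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0   # remove the subsystem first
    scripts/rpc.py bdev_null_delete bdev_null0                        # then delete its backing bdev
    # The trace repeats this pair for cnode1/bdev_null1 and cnode2/bdev_null2.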
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:57.451 bdev_null0
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:57.451 [2024-11-26 20:14:56.939566] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:57.451 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:57.452 bdev_null1
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
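[Editorial aside: gen_nvmf_target_json, whose trace follows, builds one bdev_nvme_attach_controller fragment per subsystem id from a heredoc template and joins the fragments with commas for jq. A simplified sketch of that pattern under stated assumptions — fields trimmed, heredoc bodies indented with tabs so <<- strips them, and the [] wrapper added here purely so jq sees valid JSON on its own:]

    config=()
    for subsystem in "${@:-1}"; do
    	config+=("$(cat <<-EOF
    	{
    	  "params": {
    	    "name": "Nvme$subsystem",
    	    "trtype": "$TEST_TRANSPORT",
    	    "traddr": "$NVMF_FIRST_TARGET_IP",
    	    "trsvcid": "$NVMF_PORT",
    	    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
    	  },
    	  "method": "bdev_nvme_attach_controller"
    	}
    	EOF
    	)")
    done
    # Join the per-subsystem fragments with commas (via IFS) and validate with jq:
    (IFS=,; printf '[%s]\n' "${config[*]}") | jq .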
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:57.452 {
00:36:57.452 "params": {
00:36:57.452 "name": "Nvme$subsystem",
00:36:57.452 "trtype": "$TEST_TRANSPORT",
00:36:57.452 "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:57.452 "adrfam": "ipv4",
00:36:57.452 "trsvcid": "$NVMF_PORT",
00:36:57.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:57.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:57.452 "hdgst": ${hdgst:-false},
00:36:57.452 "ddgst": ${ddgst:-false}
00:36:57.452 },
00:36:57.452 "method": "bdev_nvme_attach_controller"
00:36:57.452 }
00:36:57.452 EOF
00:36:57.452 )")
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:36:57.452 20:14:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:36:57.452 20:14:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:36:57.452 20:14:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:36:57.452 20:14:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:57.452 20:14:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:57.452 {
00:36:57.452 "params": {
00:36:57.452 "name": "Nvme$subsystem",
00:36:57.452 "trtype": "$TEST_TRANSPORT",
00:36:57.452 "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:57.452 "adrfam": "ipv4",
00:36:57.452 "trsvcid": "$NVMF_PORT",
00:36:57.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:57.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:57.452 "hdgst": ${hdgst:-false},
00:36:57.452 "ddgst": ${ddgst:-false}
00:36:57.452 },
00:36:57.452 "method": "bdev_nvme_attach_controller"
00:36:57.452 }
00:36:57.452 EOF
00:36:57.452 )")
00:36:57.452 20:14:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:36:57.452 20:14:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:36:57.452 20:14:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:36:57.452 20:14:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:36:57.452 20:14:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:36:57.452 20:14:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:36:57.452 "params": {
00:36:57.452 "name": "Nvme0",
00:36:57.452 "trtype": "tcp",
00:36:57.452 "traddr": "10.0.0.2",
00:36:57.452 "adrfam": "ipv4",
00:36:57.452 "trsvcid": "4420",
00:36:57.452 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:36:57.452 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:36:57.452 "hdgst": false,
00:36:57.452 "ddgst": false
00:36:57.452 },
00:36:57.452 "method": "bdev_nvme_attach_controller"
00:36:57.452 },{
00:36:57.452 "params": {
00:36:57.452 "name": "Nvme1",
00:36:57.452 "trtype": "tcp",
00:36:57.452 "traddr": "10.0.0.2",
00:36:57.452 "adrfam": "ipv4",
00:36:57.452 "trsvcid": "4420",
00:36:57.452 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:36:57.452 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:36:57.452 "hdgst": false,
00:36:57.452 "ddgst": false
00:36:57.452 },
00:36:57.452 "method": "bdev_nvme_attach_controller"
00:36:57.452 }'
00:36:57.452 20:14:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:36:57.452 20:14:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:36:57.452 20:14:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:36:57.452 20:14:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:57.452 20:14:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:36:57.452 20:14:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:36:57.452 20:14:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:36:57.452 20:14:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:36:57.452 20:14:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:36:57.452 20:14:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:57.452 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:36:57.452 ...
00:36:57.452 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:36:57.452 ...
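[Editorial aside: in the invocation just traced, fio receives the generated JSON on /dev/fd/62 and the generated job file on /dev/fd/61 — both are bash process substitutions, so nothing is written to disk. A stand-alone equivalent using ordinary files would look like the sketch below; the file names attach_nvme.json and dif.fio are illustrative, not from the run:]

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf attach_nvme.json dif.fio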
00:36:57.452 fio-3.35
00:36:57.452 Starting 4 threads
00:37:02.741
00:37:02.741 filename0: (groupid=0, jobs=1): err= 0: pid=3969484: Tue Nov 26 20:15:03 2024
00:37:02.741 read: IOPS=2732, BW=21.3MiB/s (22.4MB/s)(107MiB/5003msec)
00:37:02.741 slat (nsec): min=5538, max=41798, avg=6319.55, stdev=2167.30
00:37:02.741 clat (usec): min=1641, max=7392, avg=2910.80, stdev=419.68
00:37:02.741 lat (usec): min=1646, max=7397, avg=2917.12, stdev=419.64
00:37:02.741 clat percentiles (usec):
00:37:02.741 | 1.00th=[ 2245], 5.00th=[ 2474], 10.00th=[ 2573], 20.00th=[ 2704],
00:37:02.741 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2769], 60.00th=[ 2802],
00:37:02.741 | 70.00th=[ 2966], 80.00th=[ 3195], 90.00th=[ 3425], 95.00th=[ 3818],
00:37:02.741 | 99.00th=[ 4359], 99.50th=[ 4555], 99.90th=[ 4948], 99.95th=[ 5866],
00:37:02.741 | 99.99th=[ 7373]
00:37:02.741 bw ( KiB/s): min=20912, max=22832, per=23.47%, avg=21790.22, stdev=830.36, samples=9
00:37:02.741 iops : min= 2614, max= 2854, avg=2723.78, stdev=103.80, samples=9
00:37:02.741 lat (msec) : 2=0.25%, 4=96.60%, 10=3.15%
00:37:02.741 cpu : usr=97.00%, sys=2.76%, ctx=6, majf=0, minf=9
00:37:02.741 IO depths : 1=0.1%, 2=0.2%, 4=70.6%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:37:02.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:02.741 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:02.741 issued rwts: total=13672,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:02.741 latency : target=0, window=0, percentile=100.00%, depth=8
00:37:02.741 filename0: (groupid=0, jobs=1): err= 0: pid=3969485: Tue Nov 26 20:15:03 2024
00:37:02.741 read: IOPS=3306, BW=25.8MiB/s (27.1MB/s)(129MiB/5002msec)
00:37:02.742 slat (nsec): min=5530, max=49761, avg=6019.29, stdev=1471.24
00:37:02.742 clat (usec): min=756, max=4260, avg=2399.28, stdev=357.77
00:37:02.742 lat (usec): min=778, max=4266, avg=2405.30, stdev=357.56
00:37:02.742 clat percentiles (usec):
00:37:02.742 | 1.00th=[ 1500], 5.00th=[ 1893], 10.00th=[ 2024], 20.00th=[ 2089],
00:37:02.742 | 30.00th=[ 2212], 40.00th=[ 2278], 50.00th=[ 2343], 60.00th=[ 2507],
00:37:02.742 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2737], 95.00th=[ 2868],
00:37:02.742 | 99.00th=[ 3425], 99.50th=[ 3654], 99.90th=[ 4080], 99.95th=[ 4228],
00:37:02.742 | 99.99th=[ 4228]
00:37:02.742 bw ( KiB/s): min=23760, max=28800, per=28.77%, avg=26712.89, stdev=2157.59, samples=9
00:37:02.742 iops : min= 2970, max= 3600, avg=3339.11, stdev=269.70, samples=9
00:37:02.742 lat (usec) : 1000=0.10%
00:37:02.742 lat (msec) : 2=7.61%, 4=92.16%, 10=0.13%
00:37:02.742 cpu : usr=95.74%, sys=3.30%, ctx=176, majf=0, minf=9
00:37:02.742 IO depths : 1=0.5%, 2=11.5%, 4=61.7%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:37:02.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:02.742 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:02.742 issued rwts: total=16540,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:02.742 latency : target=0, window=0, percentile=100.00%, depth=8
00:37:02.742 filename1: (groupid=0, jobs=1): err= 0: pid=3969486: Tue Nov 26 20:15:03 2024
00:37:02.742 read: IOPS=2815, BW=22.0MiB/s (23.1MB/s)(110MiB/5003msec)
00:37:02.742 slat (nsec): min=5536, max=62837, avg=6429.88, stdev=2445.33
00:37:02.742 clat (usec): min=1458, max=5562, avg=2824.02, stdev=321.04
00:37:02.742 lat (usec): min=1464, max=5568, avg=2830.45, stdev=321.01
00:37:02.742 clat percentiles (usec):
00:37:02.742 | 1.00th=[ 2180], 5.00th=[ 2442], 10.00th=[ 2540], 20.00th=[ 2638],
00:37:02.742 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2737], 60.00th=[ 2769],
00:37:02.742 | 70.00th=[ 2802], 80.00th=[ 2999], 90.00th=[ 3326], 95.00th=[ 3392],
00:37:02.742 | 99.00th=[ 3916], 99.50th=[ 4178], 99.90th=[ 4490], 99.95th=[ 4621],
00:37:02.742 | 99.99th=[ 5538]
00:37:02.742 bw ( KiB/s): min=21856, max=23440, per=24.21%, avg=22476.44, stdev=653.15, samples=9
00:37:02.742 iops : min= 2732, max= 2930, avg=2809.56, stdev=81.64, samples=9
00:37:02.742 lat (msec) : 2=0.13%, 4=99.08%, 10=0.79%
00:37:02.742 cpu : usr=95.10%, sys=3.90%, ctx=214, majf=0, minf=9
00:37:02.742 IO depths : 1=0.1%, 2=0.1%, 4=70.5%, 8=29.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:37:02.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:02.742 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:02.742 issued rwts: total=14088,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:02.742 latency : target=0, window=0, percentile=100.00%, depth=8
00:37:02.742 filename1: (groupid=0, jobs=1): err= 0: pid=3969487: Tue Nov 26 20:15:03 2024
00:37:02.742 read: IOPS=2753, BW=21.5MiB/s (22.6MB/s)(108MiB/5004msec)
00:37:02.742 slat (nsec): min=5529, max=61658, avg=6303.88, stdev=2391.12
00:37:02.742 clat (usec): min=1428, max=5491, avg=2889.59, stdev=413.80
00:37:02.742 lat (usec): min=1434, max=5497, avg=2895.90, stdev=413.76
00:37:02.742 clat percentiles (usec):
00:37:02.742 | 1.00th=[ 2180], 5.00th=[ 2442], 10.00th=[ 2540], 20.00th=[ 2671],
00:37:02.742 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2737], 60.00th=[ 2769],
00:37:02.742 | 70.00th=[ 2900], 80.00th=[ 3163], 90.00th=[ 3425], 95.00th=[ 3752],
00:37:02.742 | 99.00th=[ 4359], 99.50th=[ 4555], 99.90th=[ 4948], 99.95th=[ 5473],
00:37:02.742 | 99.99th=[ 5473]
00:37:02.742 bw ( KiB/s): min=21152, max=23024, per=23.73%, avg=22035.20, stdev=754.01, samples=10
00:37:02.742 iops : min= 2644, max= 2878, avg=2754.40, stdev=94.25, samples=10
00:37:02.742 lat (msec) : 2=0.41%, 4=96.68%, 10=2.90%
00:37:02.742 cpu : usr=96.48%, sys=3.26%, ctx=21, majf=0, minf=9
00:37:02.742 IO depths : 1=0.1%, 2=0.3%, 4=69.3%, 8=30.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:37:02.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:02.742 complete : 0=0.0%, 4=94.7%, 8=5.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:02.742 issued rwts: total=13777,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:02.742 latency : target=0, window=0, percentile=100.00%, depth=8
00:37:02.742
00:37:02.742 Run status group 0 (all jobs):
00:37:02.742 READ: bw=90.7MiB/s (95.1MB/s), 21.3MiB/s-25.8MiB/s (22.4MB/s-27.1MB/s), io=454MiB (476MB), run=5002-5004msec
00:37:02.742 20:15:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1
00:37:02.742 20:15:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:37:02.742 20:15:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:37:02.742 20:15:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:37:02.742 20:15:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:37:02.742 20:15:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:37:02.742 20:15:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:02.742 20:15:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:02.742 20:15:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
20:15:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:02.742 20:15:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.742 20:15:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:02.742 20:15:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.742 20:15:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:02.742 20:15:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:02.742 20:15:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:02.742 20:15:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:02.742 20:15:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.742 20:15:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:02.742 20:15:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.742 20:15:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:02.742 20:15:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.742 20:15:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:02.742 20:15:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.742 00:37:02.742 real 0m24.415s 00:37:02.742 user 5m16.098s 00:37:02.742 sys 0m4.715s 00:37:02.742 20:15:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:02.742 20:15:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:02.742 ************************************ 00:37:02.742 END TEST fio_dif_rand_params 00:37:02.742 ************************************ 00:37:02.742 20:15:03 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:02.742 20:15:03 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:02.742 20:15:03 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:02.742 20:15:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:02.742 ************************************ 00:37:02.742 START TEST fio_dif_digest 00:37:02.742 ************************************ 00:37:02.742 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:37:02.742 20:15:03 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:02.742 20:15:03 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:02.742 20:15:03 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:02.742 20:15:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:02.742 20:15:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:02.742 20:15:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:02.742 20:15:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:02.742 20:15:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:02.742 20:15:03 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:02.742 20:15:03 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:02.742 20:15:03 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:37:02.742 20:15:03 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:37:02.742 20:15:03 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:02.742 20:15:03 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:02.742 20:15:03 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:02.742 20:15:03 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:02.742 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.742 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:02.742 bdev_null0 00:37:02.742 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.742 20:15:03 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:02.743 [2024-11-26 20:15:03.444596] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:02.743 { 00:37:02.743 "params": { 00:37:02.743 "name": "Nvme$subsystem", 00:37:02.743 "trtype": "$TEST_TRANSPORT", 00:37:02.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:02.743 "adrfam": "ipv4", 00:37:02.743 "trsvcid": "$NVMF_PORT", 00:37:02.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:37:02.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:02.743 "hdgst": ${hdgst:-false}, 00:37:02.743 "ddgst": ${ddgst:-false} 00:37:02.743 }, 00:37:02.743 "method": "bdev_nvme_attach_controller" 00:37:02.743 } 00:37:02.743 EOF 00:37:02.743 )") 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:02.743 "params": { 00:37:02.743 "name": "Nvme0", 00:37:02.743 "trtype": "tcp", 00:37:02.743 "traddr": "10.0.0.2", 00:37:02.743 "adrfam": "ipv4", 00:37:02.743 "trsvcid": "4420", 00:37:02.743 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:02.743 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:02.743 "hdgst": true, 00:37:02.743 "ddgst": true 00:37:02.743 }, 00:37:02.743 "method": "bdev_nvme_attach_controller" 00:37:02.743 }' 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:02.743 20:15:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:03.336 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:03.336 ... 
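The per-subsystem entry printed by the trace above is everything fio receives on /dev/fd/62, minus the outer envelope. A minimal standalone sketch of the same run — assuming the standard "subsystems"/"bdev" wrapper that gen_nvmf_target_json adds around that entry, and using /tmp/digest.json and jobfile.fio as hypothetical stand-ins for the /dev/fd/62 and /dev/fd/61 pipes:

    cat > /tmp/digest.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": true,
                "ddgst": true
              }
            }
          ]
        }
      ]
    }
    EOF
    # same LD_PRELOAD + ioengine pair the trace uses; jobfile.fio would carry the
    # bs=128k / iodepth=3 / numjobs=3 / runtime=10 parameters set by dif.sh above
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/digest.json jobfile.fio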
00:37:03.336 fio-3.35 00:37:03.336 Starting 3 threads 00:37:15.563 00:37:15.563 filename0: (groupid=0, jobs=1): err= 0: pid=3970869: Tue Nov 26 20:15:14 2024 00:37:15.563 read: IOPS=290, BW=36.3MiB/s (38.1MB/s)(365MiB/10048msec) 00:37:15.563 slat (usec): min=5, max=588, avg= 9.37, stdev=11.01 00:37:15.563 clat (usec): min=7323, max=52725, avg=10299.96, stdev=2314.76 00:37:15.563 lat (usec): min=7329, max=52733, avg=10309.33, stdev=2314.85 00:37:15.563 clat percentiles (usec): 00:37:15.563 | 1.00th=[ 8291], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9503], 00:37:15.563 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:37:15.563 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11469], 00:37:15.563 | 99.00th=[12256], 99.50th=[13042], 99.90th=[52691], 99.95th=[52691], 00:37:15.563 | 99.99th=[52691] 00:37:15.563 bw ( KiB/s): min=34560, max=38144, per=33.99%, avg=37337.60, stdev=966.02, samples=20 00:37:15.563 iops : min= 270, max= 298, avg=291.70, stdev= 7.55, samples=20 00:37:15.563 lat (msec) : 10=39.40%, 20=60.23%, 50=0.14%, 100=0.24% 00:37:15.563 cpu : usr=94.72%, sys=5.00%, ctx=19, majf=0, minf=99 00:37:15.563 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:15.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.563 issued rwts: total=2919,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.563 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:15.563 filename0: (groupid=0, jobs=1): err= 0: pid=3970870: Tue Nov 26 20:15:14 2024 00:37:15.563 read: IOPS=283, BW=35.5MiB/s (37.2MB/s)(356MiB/10046msec) 00:37:15.563 slat (nsec): min=5896, max=52102, avg=7381.10, stdev=2191.88 00:37:15.563 clat (usec): min=6738, max=47540, avg=10550.73, stdev=1280.01 00:37:15.563 lat (usec): min=6745, max=47547, avg=10558.11, stdev=1280.06 00:37:15.563 clat percentiles (usec): 00:37:15.563 | 1.00th=[ 8225], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9896], 00:37:15.563 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:37:15.563 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11994], 00:37:15.563 | 99.00th=[12780], 99.50th=[13173], 99.90th=[14091], 99.95th=[45351], 00:37:15.563 | 99.99th=[47449] 00:37:15.563 bw ( KiB/s): min=35584, max=38144, per=33.19%, avg=36454.40, stdev=623.76, samples=20 00:37:15.563 iops : min= 278, max= 298, avg=284.80, stdev= 4.87, samples=20 00:37:15.563 lat (msec) : 10=24.28%, 20=75.65%, 50=0.07% 00:37:15.563 cpu : usr=95.34%, sys=4.42%, ctx=14, majf=0, minf=199 00:37:15.563 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:15.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.563 issued rwts: total=2850,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.563 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:15.563 filename0: (groupid=0, jobs=1): err= 0: pid=3970871: Tue Nov 26 20:15:14 2024 00:37:15.563 read: IOPS=283, BW=35.5MiB/s (37.2MB/s)(357MiB/10047msec) 00:37:15.563 slat (nsec): min=5910, max=40380, avg=7455.30, stdev=1718.12 00:37:15.563 clat (usec): min=6637, max=50928, avg=10540.93, stdev=1327.14 00:37:15.563 lat (usec): min=6643, max=50935, avg=10548.39, stdev=1327.23 00:37:15.563 clat percentiles (usec): 00:37:15.563 | 1.00th=[ 8356], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 
00:37:15.563 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:37:15.563 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:37:15.563 | 99.00th=[12518], 99.50th=[12780], 99.90th=[15270], 99.95th=[47449], 00:37:15.563 | 99.99th=[51119] 00:37:15.563 bw ( KiB/s): min=35840, max=38144, per=33.22%, avg=36492.80, stdev=629.68, samples=20 00:37:15.563 iops : min= 280, max= 298, avg=285.10, stdev= 4.92, samples=20 00:37:15.563 lat (msec) : 10=25.06%, 20=74.87%, 50=0.04%, 100=0.04% 00:37:15.563 cpu : usr=95.79%, sys=3.98%, ctx=15, majf=0, minf=117 00:37:15.563 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:15.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:15.563 issued rwts: total=2853,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:15.563 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:15.563 00:37:15.563 Run status group 0 (all jobs): 00:37:15.563 READ: bw=107MiB/s (112MB/s), 35.5MiB/s-36.3MiB/s (37.2MB/s-38.1MB/s), io=1078MiB (1130MB), run=10046-10048msec 00:37:15.563 20:15:14 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:15.563 20:15:14 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:15.563 20:15:14 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:15.563 20:15:14 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:15.563 20:15:14 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:15.563 20:15:14 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:15.563 20:15:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.563 20:15:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:15.563 20:15:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.563 20:15:14 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:15.563 20:15:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.563 20:15:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:15.563 20:15:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.563 00:37:15.563 real 0m11.143s 00:37:15.563 user 0m44.599s 00:37:15.563 sys 0m1.641s 00:37:15.563 20:15:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:15.563 20:15:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:15.563 ************************************ 00:37:15.563 END TEST fio_dif_digest 00:37:15.563 ************************************ 00:37:15.563 20:15:14 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:15.563 20:15:14 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:15.563 20:15:14 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:15.564 20:15:14 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:37:15.564 20:15:14 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:15.564 20:15:14 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:37:15.564 20:15:14 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:15.564 20:15:14 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:15.564 rmmod nvme_tcp 00:37:15.564 rmmod nvme_fabrics 00:37:15.564 rmmod nvme_keyring 00:37:15.564 20:15:14 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:15.564 20:15:14 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:37:15.564 20:15:14 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:37:15.564 20:15:14 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3960512 ']' 00:37:15.564 20:15:14 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3960512 00:37:15.564 20:15:14 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3960512 ']' 00:37:15.564 20:15:14 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3960512 00:37:15.564 20:15:14 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:37:15.564 20:15:14 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:15.564 20:15:14 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3960512 00:37:15.564 20:15:14 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:15.564 20:15:14 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:15.564 20:15:14 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3960512' 00:37:15.564 killing process with pid 3960512 00:37:15.564 20:15:14 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3960512 00:37:15.564 20:15:14 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3960512 00:37:15.564 20:15:14 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:15.564 20:15:14 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:17.475 Waiting for block devices as requested 00:37:17.475 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:17.475 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:17.475 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:17.735 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:17.735 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:17.735 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:17.997 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:17.997 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:17.997 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:18.257 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:18.257 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:18.518 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:18.518 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:18.518 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:18.778 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:18.778 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:18.778 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:19.039 20:15:19 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:19.039 20:15:19 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:19.039 20:15:19 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:37:19.039 20:15:19 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:37:19.039 20:15:19 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:19.039 20:15:19 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:37:19.039 20:15:19 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:19.039 20:15:19 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:19.039 20:15:19 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:19.039 20:15:19 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:19.039 20:15:19 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:21.584 20:15:21 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:21.585 
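Two cleanup idioms do the heavy lifting in the nvmftestfini trace above: a guarded kill of the nvmf_tgt pid and an iptables restore keyed on the SPDK_NVMF comment. A condensed sketch (the sudo-wrapper special case that autotest_common.sh also handles is elided here):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                 # still alive?
        if [ "$(uname)" = Linux ]; then
            # the real helper also inspects the comm name (reactor_0 above)
            # to avoid killing a sudo wrapper; skipped in this sketch
            ps --no-headers -o comm= "$pid" >/dev/null || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                # reap so ports and hugepages free up
    }

    # iptr: drop only the firewall rules this run tagged, keep everything else
    iptables-save | grep -v SPDK_NVMF | iptables-restore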
00:37:21.585 real 1m18.668s 00:37:21.585 user 7m56.350s 00:37:21.585 sys 0m22.292s 00:37:21.585 20:15:21 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:21.585 20:15:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:21.585 ************************************ 00:37:21.585 END TEST nvmf_dif 00:37:21.585 ************************************ 00:37:21.585 20:15:21 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:21.585 20:15:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:21.585 20:15:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:21.585 20:15:21 -- common/autotest_common.sh@10 -- # set +x 00:37:21.585 ************************************ 00:37:21.585 START TEST nvmf_abort_qd_sizes 00:37:21.585 ************************************ 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:21.585 * Looking for test storage... 00:37:21.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:21.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:21.585 --rc genhtml_branch_coverage=1 00:37:21.585 --rc genhtml_function_coverage=1 00:37:21.585 --rc genhtml_legend=1 00:37:21.585 --rc geninfo_all_blocks=1 00:37:21.585 --rc geninfo_unexecuted_blocks=1 00:37:21.585 00:37:21.585 ' 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:21.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:21.585 --rc genhtml_branch_coverage=1 00:37:21.585 --rc genhtml_function_coverage=1 00:37:21.585 --rc genhtml_legend=1 00:37:21.585 --rc geninfo_all_blocks=1 00:37:21.585 --rc geninfo_unexecuted_blocks=1 00:37:21.585 00:37:21.585 ' 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:21.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:21.585 --rc genhtml_branch_coverage=1 00:37:21.585 --rc genhtml_function_coverage=1 00:37:21.585 --rc genhtml_legend=1 00:37:21.585 --rc geninfo_all_blocks=1 00:37:21.585 --rc geninfo_unexecuted_blocks=1 00:37:21.585 00:37:21.585 ' 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:21.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:21.585 --rc genhtml_branch_coverage=1 00:37:21.585 --rc genhtml_function_coverage=1 00:37:21.585 --rc genhtml_legend=1 00:37:21.585 --rc geninfo_all_blocks=1 00:37:21.585 --rc geninfo_unexecuted_blocks=1 00:37:21.585 00:37:21.585 ' 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:21.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:21.585 20:15:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:21.586 20:15:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:21.586 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:21.586 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:21.586 20:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:37:21.586 20:15:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:29.727 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:29.727 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:37:29.727 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:29.727 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:29.727 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:29.727 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:29.727 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:29.727 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:37:29.727 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:29.728 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:29.728 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:29.728 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:29.728 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:29.728 20:15:29 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:29.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:29.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:37:29.728 00:37:29.728 --- 10.0.0.2 ping statistics --- 00:37:29.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:29.728 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:29.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:29.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:37:29.728 00:37:29.728 --- 10.0.0.1 ping statistics --- 00:37:29.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:29.728 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:37:29.728 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:29.729 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:37:29.729 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:29.729 20:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:32.272 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:32.272 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:32.272 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:32.272 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:32.272 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:32.272 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:32.272 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:32.272 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:32.272 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:32.272 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:32.272 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:32.272 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:32.272 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:32.272 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:32.272 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:32.272 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:32.272 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:32.845 20:15:33 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:32.845 20:15:33 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:32.845 20:15:33 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:32.845 20:15:33 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:32.845 20:15:33 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:32.845 20:15:33 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:32.845 20:15:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:32.845 20:15:33 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:32.845 20:15:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:32.845 20:15:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:32.845 20:15:33 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3980856 00:37:32.845 20:15:33 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3980856 00:37:32.845 20:15:33 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:32.845 20:15:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3980856 ']' 00:37:32.845 20:15:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:32.845 20:15:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:32.845 20:15:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:32.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:32.845 20:15:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:32.845 20:15:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:32.845 [2024-11-26 20:15:33.526659] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:37:32.845 [2024-11-26 20:15:33.526725] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:32.845 [2024-11-26 20:15:33.623070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:32.845 [2024-11-26 20:15:33.662755] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:32.845 [2024-11-26 20:15:33.662788] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:32.845 [2024-11-26 20:15:33.662796] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:32.845 [2024-11-26 20:15:33.662803] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:32.845 [2024-11-26 20:15:33.662809] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:33.106 [2024-11-26 20:15:33.664451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:33.106 [2024-11-26 20:15:33.664604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:33.106 [2024-11-26 20:15:33.664740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:33.106 [2024-11-26 20:15:33.664741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:33.677 20:15:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:33.677 20:15:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:37:33.677 20:15:34 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:33.677 20:15:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:33.677 20:15:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:33.677 20:15:34 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:33.677 20:15:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:33.677 20:15:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:33.677 20:15:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:33.677 20:15:34 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:33.677 20:15:34 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:33.677 20:15:34 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:37:33.677 20:15:34 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:33.677 20:15:34 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:33.677 20:15:34 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:37:33.677 20:15:34 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:33.677 
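The target/initiator split that makes 10.0.0.1 and 10.0.0.2 reachable on a single host is plain network-namespace plumbing; every command below appears in the nvmf_tcp_init trace further up (the cvl_0_0/cvl_0_1 interface names come from the two e810 ports the PCI scan found):

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"            # target port moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP listener port; the comment lets iptr strip it at teardown
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                         # root ns  -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1     # netns    -> initiator
    # nvmfappstart then launches the target inside the namespace:
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf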
20:15:34 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:33.677 20:15:34 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:33.677 20:15:34 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:37:33.678 20:15:34 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:37:33.678 20:15:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:33.678 20:15:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:37:33.678 20:15:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:33.678 20:15:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:33.678 20:15:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:33.678 20:15:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:33.678 ************************************ 00:37:33.678 START TEST spdk_target_abort 00:37:33.678 ************************************ 00:37:33.678 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:37:33.678 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:33.678 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:37:33.678 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.678 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:33.939 spdk_targetn1 00:37:33.939 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.939 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:33.939 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.939 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:33.939 [2024-11-26 20:15:34.736223] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:33.939 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.939 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:34.200 [2024-11-26 20:15:34.788564] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:34.200 20:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:34.467 [2024-11-26 20:15:35.022337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:188 nsid:1 lba:32 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:34.467 [2024-11-26 20:15:35.022390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:37:34.467 [2024-11-26 20:15:35.040852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:640 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:37:34.467 [2024-11-26 20:15:35.040889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0053 p:1 m:0 dnr:0 00:37:34.467 [2024-11-26 20:15:35.057446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1152 len:8 PRP1 0x200004ac8000 PRP2 0x0 00:37:34.467 [2024-11-26 20:15:35.057482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0091 p:1 m:0 dnr:0 00:37:34.467 [2024-11-26 20:15:35.080748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1784 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:37:34.467 [2024-11-26 20:15:35.080783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00e2 p:1 m:0 dnr:0 00:37:34.467 [2024-11-26 20:15:35.081423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1832 len:8 PRP1 0x200004ac8000 PRP2 0x0 00:37:34.467 [2024-11-26 20:15:35.081449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00e7 p:1 m:0 dnr:0 00:37:34.467 [2024-11-26 20:15:35.088730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2008 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:34.467 [2024-11-26 20:15:35.088762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00fc p:1 m:0 dnr:0 00:37:34.467 [2024-11-26 20:15:35.119392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2976 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:34.467 [2024-11-26 20:15:35.119428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:37.908 Initializing NVMe Controllers 00:37:37.908 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:37.908 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:37.908 Initialization complete. Launching workers. 
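Before this first run, the target was assembled by the rpc_cmd sequence traced above (rpc_cmd is effectively a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock). Consolidated, using the PCI address 0000:65:00.0 and names from this run, the setup is:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

The statistics for this qd=4 abort run follow below.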
00:37:37.908 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11132, failed: 7 00:37:37.908 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2627, failed to submit 8512 00:37:37.908 success 732, unsuccessful 1895, failed 0 00:37:37.908 20:15:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:37.908 20:15:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:37.908 [2024-11-26 20:15:38.281449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:432 len:8 PRP1 0x200004e4a000 PRP2 0x0 00:37:37.908 [2024-11-26 20:15:38.281487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:37:37.908 [2024-11-26 20:15:38.312270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:175 nsid:1 lba:1176 len:8 PRP1 0x200004e52000 PRP2 0x0 00:37:37.908 [2024-11-26 20:15:38.312294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:175 cdw0:0 sqhd:009c p:1 m:0 dnr:0 00:37:37.908 [2024-11-26 20:15:38.320260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:170 nsid:1 lba:1376 len:8 PRP1 0x200004e56000 PRP2 0x0 00:37:37.908 [2024-11-26 20:15:38.320280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:170 cdw0:0 sqhd:00b8 p:1 m:0 dnr:0 00:37:41.207 Initializing NVMe Controllers 00:37:41.207 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:41.207 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:41.207 Initialization complete. Launching workers. 
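Each pass of the qds loop reruns the same abort example, changing only -q (4, 24, 64). Spelled out, the qd=24 invocation just traced is:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
      -q 24 -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

Here -q is the queue depth under test, -w rw -M 50 makes the workload 50% reads, and -o 4096 sets the I/O size. Its statistics follow below.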
00:37:41.207 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8505, failed: 3 00:37:41.207 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1226, failed to submit 7282 00:37:41.207 success 350, unsuccessful 876, failed 0 00:37:41.207 20:15:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:41.207 20:15:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:41.207 [2024-11-26 20:15:41.717861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:154 nsid:1 lba:3536 len:8 PRP1 0x200004b06000 PRP2 0x0 00:37:41.207 [2024-11-26 20:15:41.717892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:154 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:41.778 [2024-11-26 20:15:42.296678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:70592 len:8 PRP1 0x200004b00000 PRP2 0x0 00:37:41.778 [2024-11-26 20:15:42.296701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.160 [2024-11-26 20:15:43.845699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:187 nsid:1 lba:251048 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:43.160 [2024-11-26 20:15:43.845721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:187 cdw0:0 sqhd:001f p:1 m:0 dnr:0 00:37:44.114 Initializing NVMe Controllers 00:37:44.114 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:44.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:44.114 Initialization complete. Launching workers. 
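A quick sanity check on the qd=24 counters above (a sketch using the numbers as printed): aborts submitted plus aborts that failed to submit should equal total I/O, and every submitted abort resolves as either success or unsuccessful.

  (( 1226 + 7282 == 8505 + 3 )) && echo "submit counts reconcile"
  (( 350 + 876 == 1226 )) && echo "abort outcomes reconcile"

The qd=64 statistics follow below.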
00:37:44.114 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43650, failed: 3 00:37:44.114 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2668, failed to submit 40985 00:37:44.114 success 596, unsuccessful 2072, failed 0 00:37:44.114 20:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:44.114 20:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.114 20:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:44.114 20:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.114 20:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:44.114 20:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.114 20:15:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:46.027 20:15:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.027 20:15:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3980856 00:37:46.027 20:15:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3980856 ']' 00:37:46.027 20:15:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3980856 00:37:46.027 20:15:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:37:46.027 20:15:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:46.027 20:15:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3980856 00:37:46.027 20:15:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:46.027 20:15:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:46.027 20:15:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3980856' 00:37:46.027 killing process with pid 3980856 00:37:46.027 20:15:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3980856 00:37:46.027 20:15:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3980856 00:37:46.027 00:37:46.027 real 0m12.318s 00:37:46.027 user 0m50.145s 00:37:46.027 sys 0m2.035s 00:37:46.027 20:15:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:46.027 20:15:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:46.027 ************************************ 00:37:46.027 END TEST spdk_target_abort 00:37:46.027 ************************************ 00:37:46.027 20:15:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:46.027 20:15:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:46.027 20:15:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:46.027 20:15:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:46.027 ************************************ 00:37:46.027 START TEST kernel_target_abort 00:37:46.027 
************************************ 00:37:46.027 20:15:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:37:46.027 20:15:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:46.027 20:15:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:37:46.027 20:15:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:46.027 20:15:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:46.027 20:15:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:46.027 20:15:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:46.027 20:15:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:46.027 20:15:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:46.028 20:15:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:46.028 20:15:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:46.028 20:15:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:46.028 20:15:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:46.028 20:15:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:46.028 20:15:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:37:46.028 20:15:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:46.028 20:15:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:46.028 20:15:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:46.028 20:15:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:37:46.028 20:15:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:37:46.028 20:15:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:37:46.289 20:15:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:46.289 20:15:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:49.588 Waiting for block devices as requested 00:37:49.588 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:49.588 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:49.848 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:49.848 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:49.848 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:50.108 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:50.108 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:50.108 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:50.369 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:50.369 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:50.629 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:50.629 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:50.629 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:50.890 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:50.890 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:50.890 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:50.890 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:51.461 No valid GPT data, bailing 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:51.461 20:15:52 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:37:51.461 00:37:51.461 Discovery Log Number of Records 2, Generation counter 2 00:37:51.461 =====Discovery Log Entry 0====== 00:37:51.461 trtype: tcp 00:37:51.461 adrfam: ipv4 00:37:51.461 subtype: current discovery subsystem 00:37:51.461 treq: not specified, sq flow control disable supported 00:37:51.461 portid: 1 00:37:51.461 trsvcid: 4420 00:37:51.461 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:51.461 traddr: 10.0.0.1 00:37:51.461 eflags: none 00:37:51.461 sectype: none 00:37:51.461 =====Discovery Log Entry 1====== 00:37:51.461 trtype: tcp 00:37:51.461 adrfam: ipv4 00:37:51.461 subtype: nvme subsystem 00:37:51.461 treq: not specified, sq flow control disable supported 00:37:51.461 portid: 1 00:37:51.461 trsvcid: 4420 00:37:51.461 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:51.461 traddr: 10.0.0.1 00:37:51.461 eflags: none 00:37:51.461 sectype: none 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:51.461 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:51.462 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:51.462 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:51.462 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:51.462 20:15:52 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:51.462 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:51.462 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:51.462 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:51.462 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:51.462 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:51.462 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:51.462 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:51.462 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:51.462 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:51.462 20:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:54.764 Initializing NVMe Controllers 00:37:54.764 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:54.764 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:54.764 Initialization complete. Launching workers. 00:37:54.764 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67072, failed: 0 00:37:54.764 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67072, failed to submit 0 00:37:54.764 success 0, unsuccessful 67072, failed 0 00:37:54.764 20:15:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:54.764 20:15:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:58.068 Initializing NVMe Controllers 00:37:58.068 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:58.068 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:58.068 Initialization complete. Launching workers. 
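The kernel target behind these runs was assembled through configfs by configure_kernel_target, traced earlier in this test. A minimal sketch; the trace records only the echoed values, so the standard nvmet attribute file names are filled in here as assumptions:

  modprobe nvmet
  sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$sub" "$sub/namespaces/1" "$port"
  echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"   # assumed attribute name
  echo 1 > "$sub/namespaces/1/enable"                   # assumed attribute name
  echo 10.0.0.1 > "$port/addr_traddr"                   # assumed attribute name
  echo tcp  > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"

Note the qd=4 kernel-target run above: all 67072 aborts were submitted and none succeeded. The qd=24 statistics follow below.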
00:37:58.068 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 117385, failed: 0 00:37:58.068 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29562, failed to submit 87823 00:37:58.068 success 0, unsuccessful 29562, failed 0 00:37:58.068 20:15:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:58.068 20:15:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:01.387 Initializing NVMe Controllers 00:38:01.387 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:01.387 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:01.387 Initialization complete. Launching workers. 00:38:01.387 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146451, failed: 0 00:38:01.387 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36670, failed to submit 109781 00:38:01.387 success 0, unsuccessful 36670, failed 0 00:38:01.387 20:16:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:01.387 20:16:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:01.387 20:16:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:38:01.387 20:16:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:01.387 20:16:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:01.387 20:16:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:01.387 20:16:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:01.387 20:16:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:38:01.387 20:16:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:38:01.387 20:16:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:04.689 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:04.689 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:04.689 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:04.689 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:04.689 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:04.689 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:04.689 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:04.689 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:04.689 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:04.689 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:04.689 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:04.689 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:04.689 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:04.689 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:04.689 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:38:04.689 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:06.603 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:06.603 00:38:06.603 real 0m20.560s 00:38:06.603 user 0m9.962s 00:38:06.603 sys 0m6.212s 00:38:06.603 20:16:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:06.603 20:16:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:06.603 ************************************ 00:38:06.603 END TEST kernel_target_abort 00:38:06.603 ************************************ 00:38:06.862 20:16:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:06.862 20:16:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:06.862 20:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:06.862 20:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:38:06.862 20:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:06.862 20:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:38:06.862 20:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:06.862 20:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:06.862 rmmod nvme_tcp 00:38:06.862 rmmod nvme_fabrics 00:38:06.862 rmmod nvme_keyring 00:38:06.862 20:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:06.862 20:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:38:06.862 20:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:38:06.862 20:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3980856 ']' 00:38:06.862 20:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3980856 00:38:06.862 20:16:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3980856 ']' 00:38:06.862 20:16:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3980856 00:38:06.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3980856) - No such process 00:38:06.862 20:16:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3980856 is not found' 00:38:06.862 Process with pid 3980856 is not found 00:38:06.862 20:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:06.862 20:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:10.164 Waiting for block devices as requested 00:38:10.164 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:10.424 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:10.424 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:10.424 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:10.685 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:10.685 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:10.685 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:10.685 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:10.946 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:10.946 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:11.207 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:11.207 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:11.207 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:11.470 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:11.470 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:11.471 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:11.734 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:11.994 20:16:12 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:11.995 20:16:12 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:11.995 20:16:12 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:38:11.995 20:16:12 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:38:11.995 20:16:12 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:11.995 20:16:12 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:38:11.995 20:16:12 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:11.995 20:16:12 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:11.995 20:16:12 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:11.995 20:16:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:11.995 20:16:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:14.542 20:16:14 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:14.542 00:38:14.542 real 0m52.740s 00:38:14.542 user 1m5.522s 00:38:14.542 sys 0m19.366s 00:38:14.542 20:16:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:14.542 20:16:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:14.542 ************************************ 00:38:14.542 END TEST nvmf_abort_qd_sizes 00:38:14.542 ************************************ 00:38:14.542 20:16:14 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:14.542 20:16:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:14.542 20:16:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:14.542 20:16:14 -- common/autotest_common.sh@10 -- # set +x 00:38:14.542 ************************************ 00:38:14.542 START TEST keyring_file 00:38:14.542 ************************************ 00:38:14.542 20:16:14 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:14.542 * Looking for test storage... 
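For reference, clean_kernel_target, traced above just before the kernel_target_abort summary, undoes that configfs tree in reverse order. Apart from the first echo, whose destination attribute is not shown in the trace, the commands are the ones recorded:

  echo 0 > "$sub/namespaces/1/enable"   # destination assumed; the trace shows only 'echo 0'
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  modprobe -r nvmet_tcp nvmet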
00:38:14.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:14.542 20:16:14 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:14.542 20:16:14 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:38:14.542 20:16:14 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:14.542 20:16:15 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:14.542 20:16:15 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:14.542 20:16:15 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:14.542 20:16:15 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:14.542 20:16:15 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:38:14.542 20:16:15 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:38:14.542 20:16:15 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:38:14.542 20:16:15 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:38:14.542 20:16:15 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:38:14.542 20:16:15 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:38:14.542 20:16:15 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:38:14.542 20:16:15 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:14.542 20:16:15 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:38:14.542 20:16:15 keyring_file -- scripts/common.sh@345 -- # : 1 00:38:14.542 20:16:15 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:14.542 20:16:15 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:14.542 20:16:15 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:38:14.542 20:16:15 keyring_file -- scripts/common.sh@353 -- # local d=1 00:38:14.542 20:16:15 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:14.542 20:16:15 keyring_file -- scripts/common.sh@355 -- # echo 1 00:38:14.542 20:16:15 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:38:14.542 20:16:15 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:38:14.542 20:16:15 keyring_file -- scripts/common.sh@353 -- # local d=2 00:38:14.542 20:16:15 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:14.542 20:16:15 keyring_file -- scripts/common.sh@355 -- # echo 2 00:38:14.542 20:16:15 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:38:14.542 20:16:15 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:14.542 20:16:15 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:14.542 20:16:15 keyring_file -- scripts/common.sh@368 -- # return 0 00:38:14.542 20:16:15 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:14.542 20:16:15 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:14.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.542 --rc genhtml_branch_coverage=1 00:38:14.542 --rc genhtml_function_coverage=1 00:38:14.542 --rc genhtml_legend=1 00:38:14.542 --rc geninfo_all_blocks=1 00:38:14.542 --rc geninfo_unexecuted_blocks=1 00:38:14.542 00:38:14.542 ' 00:38:14.542 20:16:15 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:14.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.542 --rc genhtml_branch_coverage=1 00:38:14.542 --rc genhtml_function_coverage=1 00:38:14.542 --rc genhtml_legend=1 00:38:14.542 --rc geninfo_all_blocks=1 
00:38:14.542 --rc geninfo_unexecuted_blocks=1 00:38:14.542 00:38:14.542 ' 00:38:14.542 20:16:15 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:14.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.542 --rc genhtml_branch_coverage=1 00:38:14.542 --rc genhtml_function_coverage=1 00:38:14.542 --rc genhtml_legend=1 00:38:14.542 --rc geninfo_all_blocks=1 00:38:14.542 --rc geninfo_unexecuted_blocks=1 00:38:14.542 00:38:14.542 ' 00:38:14.542 20:16:15 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:14.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.542 --rc genhtml_branch_coverage=1 00:38:14.542 --rc genhtml_function_coverage=1 00:38:14.542 --rc genhtml_legend=1 00:38:14.542 --rc geninfo_all_blocks=1 00:38:14.542 --rc geninfo_unexecuted_blocks=1 00:38:14.542 00:38:14.542 ' 00:38:14.542 20:16:15 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:14.542 20:16:15 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:14.542 20:16:15 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:14.542 20:16:15 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:14.542 20:16:15 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:14.542 20:16:15 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:14.542 20:16:15 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:14.542 20:16:15 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:14.542 20:16:15 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:14.542 20:16:15 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:14.542 20:16:15 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:14.543 20:16:15 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:38:14.543 20:16:15 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:14.543 20:16:15 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:14.543 20:16:15 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:14.543 20:16:15 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.543 20:16:15 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.543 20:16:15 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.543 20:16:15 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:14.543 20:16:15 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@51 -- # : 0 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:14.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:14.543 20:16:15 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:14.543 20:16:15 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:14.543 20:16:15 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:14.543 20:16:15 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:14.543 20:16:15 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:14.543 20:16:15 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:14.543 20:16:15 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:14.543 20:16:15 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
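The trace below walks through prep_key, which turns a raw hex key into an NVMe/TCP interchange PSK file for the test to hand to bdevperf. A sketch of the traced steps for key0; the inline python that emits the NVMeTLSkey-1 string is not captured in this log, so it is left abstract here:

  name=key0
  key=00112233445566778899aabbccddeeff
  digest=0
  path=$(mktemp)                        # /tmp/tmp.kkv9SFIkPF in this run
  # format_interchange_psk "$key" "$digest" > "$path"   # python helper, body not shown
  chmod 0600 "$path"                    # keep the PSK file private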
00:38:14.543 20:16:15 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:14.543 20:16:15 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:14.543 20:16:15 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:14.543 20:16:15 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:14.543 20:16:15 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.kkv9SFIkPF 00:38:14.543 20:16:15 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:14.543 20:16:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.kkv9SFIkPF 00:38:14.543 20:16:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.kkv9SFIkPF 00:38:14.543 20:16:15 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.kkv9SFIkPF 00:38:14.543 20:16:15 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:14.543 20:16:15 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:14.543 20:16:15 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:14.543 20:16:15 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:14.543 20:16:15 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:14.543 20:16:15 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:14.543 20:16:15 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.jIRBt5JwB5 00:38:14.543 20:16:15 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:14.543 20:16:15 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:14.543 20:16:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.jIRBt5JwB5 00:38:14.543 20:16:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.jIRBt5JwB5 00:38:14.543 20:16:15 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.jIRBt5JwB5 00:38:14.543 20:16:15 keyring_file -- keyring/file.sh@30 -- # tgtpid=3991207 00:38:14.543 20:16:15 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3991207 00:38:14.543 20:16:15 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:14.543 20:16:15 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3991207 ']' 00:38:14.543 20:16:15 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:14.543 20:16:15 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:14.543 20:16:15 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:14.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:14.543 20:16:15 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:14.543 20:16:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:14.543 [2024-11-26 20:16:15.229519] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:38:14.543 [2024-11-26 20:16:15.229577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3991207 ] 00:38:14.543 [2024-11-26 20:16:15.315199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:14.543 [2024-11-26 20:16:15.352055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:15.486 20:16:16 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:15.486 20:16:16 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:15.486 20:16:16 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:15.486 20:16:16 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.486 20:16:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:15.486 [2024-11-26 20:16:16.015188] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:15.486 null0 00:38:15.486 [2024-11-26 20:16:16.047236] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:15.486 [2024-11-26 20:16:16.047615] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:15.486 20:16:16 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.486 20:16:16 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:15.486 20:16:16 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:15.486 20:16:16 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:15.486 20:16:16 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:15.486 20:16:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:15.486 20:16:16 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:15.486 20:16:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:15.486 20:16:16 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:15.486 20:16:16 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.486 20:16:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:15.486 [2024-11-26 20:16:16.079310] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:15.486 request: 00:38:15.486 { 00:38:15.486 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:15.486 "secure_channel": false, 00:38:15.486 "listen_address": { 00:38:15.486 "trtype": "tcp", 00:38:15.486 "traddr": "127.0.0.1", 00:38:15.486 "trsvcid": "4420" 00:38:15.486 }, 00:38:15.486 "method": "nvmf_subsystem_add_listener", 00:38:15.486 "req_id": 1 00:38:15.486 } 00:38:15.486 Got JSON-RPC error response 00:38:15.486 response: 00:38:15.486 { 00:38:15.486 
"code": -32602, 00:38:15.486 "message": "Invalid parameters" 00:38:15.486 } 00:38:15.486 20:16:16 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:15.486 20:16:16 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:15.486 20:16:16 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:15.486 20:16:16 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:15.486 20:16:16 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:15.486 20:16:16 keyring_file -- keyring/file.sh@47 -- # bperfpid=3991244 00:38:15.486 20:16:16 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3991244 /var/tmp/bperf.sock 00:38:15.486 20:16:16 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:15.486 20:16:16 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3991244 ']' 00:38:15.486 20:16:16 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:15.486 20:16:16 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:15.486 20:16:16 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:15.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:15.486 20:16:16 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:15.486 20:16:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:15.486 [2024-11-26 20:16:16.135893] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:38:15.486 [2024-11-26 20:16:16.135942] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3991244 ] 00:38:15.486 [2024-11-26 20:16:16.223242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:15.486 [2024-11-26 20:16:16.260253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:16.430 20:16:16 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:16.430 20:16:16 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:16.430 20:16:16 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kkv9SFIkPF 00:38:16.430 20:16:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.kkv9SFIkPF 00:38:16.430 20:16:17 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.jIRBt5JwB5 00:38:16.430 20:16:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.jIRBt5JwB5 00:38:16.691 20:16:17 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:38:16.691 20:16:17 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:16.691 20:16:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:16.691 20:16:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:16.691 20:16:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:38:16.953 20:16:17 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.kkv9SFIkPF == \/\t\m\p\/\t\m\p\.\k\k\v\9\S\F\I\k\P\F ]] 00:38:16.954 20:16:17 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:38:16.954 20:16:17 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:38:16.954 20:16:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:16.954 20:16:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:16.954 20:16:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:16.954 20:16:17 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.jIRBt5JwB5 == \/\t\m\p\/\t\m\p\.\j\I\R\B\t\5\J\w\B\5 ]] 00:38:16.954 20:16:17 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:38:16.954 20:16:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:16.954 20:16:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:16.954 20:16:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:16.954 20:16:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:16.954 20:16:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:17.214 20:16:17 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:17.214 20:16:17 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:38:17.214 20:16:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:17.214 20:16:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:17.214 20:16:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:17.214 20:16:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:17.214 20:16:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:17.475 20:16:18 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:38:17.475 20:16:18 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:17.475 20:16:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:17.475 [2024-11-26 20:16:18.192470] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:17.475 nvme0n1 00:38:17.736 20:16:18 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:38:17.736 20:16:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:17.736 20:16:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:17.736 20:16:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:17.736 20:16:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:17.736 20:16:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:17.736 20:16:18 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:38:17.736 20:16:18 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:38:17.736 20:16:18 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:38:17.737 20:16:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:17.737 20:16:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:17.737 20:16:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:17.737 20:16:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:17.998 20:16:18 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:38:17.998 20:16:18 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:17.998 Running I/O for 1 seconds... 00:38:19.213 19474.00 IOPS, 76.07 MiB/s 00:38:19.213 Latency(us) 00:38:19.213 [2024-11-26T19:16:20.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:19.213 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:19.213 nvme0n1 : 1.00 19520.27 76.25 0.00 0.00 6544.55 2757.97 12397.23 00:38:19.213 [2024-11-26T19:16:20.034Z] =================================================================================================================== 00:38:19.213 [2024-11-26T19:16:20.034Z] Total : 19520.27 76.25 0.00 0.00 6544.55 2757.97 12397.23 00:38:19.213 { 00:38:19.213 "results": [ 00:38:19.213 { 00:38:19.213 "job": "nvme0n1", 00:38:19.213 "core_mask": "0x2", 00:38:19.213 "workload": "randrw", 00:38:19.213 "percentage": 50, 00:38:19.213 "status": "finished", 00:38:19.213 "queue_depth": 128, 00:38:19.213 "io_size": 4096, 00:38:19.213 "runtime": 1.004238, 00:38:19.213 "iops": 19520.27308267562, 00:38:19.213 "mibps": 76.25106672920164, 00:38:19.213 "io_failed": 0, 00:38:19.213 "io_timeout": 0, 00:38:19.213 "avg_latency_us": 6544.550827254332, 00:38:19.213 "min_latency_us": 2757.9733333333334, 00:38:19.213 "max_latency_us": 12397.226666666667 00:38:19.213 } 00:38:19.213 ], 00:38:19.213 "core_count": 1 00:38:19.213 } 00:38:19.213 20:16:19 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:19.213 20:16:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:19.213 20:16:19 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:38:19.213 20:16:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:19.213 20:16:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:19.213 20:16:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:19.213 20:16:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:19.213 20:16:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:19.475 20:16:20 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:19.475 20:16:20 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:38:19.475 20:16:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:19.475 20:16:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:19.475 20:16:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:19.475 20:16:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:19.475 20:16:20 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:19.737 20:16:20 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:38:19.737 20:16:20 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:19.737 20:16:20 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:19.737 20:16:20 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:19.737 20:16:20 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:19.737 20:16:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:19.737 20:16:20 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:19.737 20:16:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:19.737 20:16:20 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:19.737 20:16:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:19.737 [2024-11-26 20:16:20.550217] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:19.737 [2024-11-26 20:16:20.550618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x895c50 (107): Transport endpoint is not connected 00:38:19.737 [2024-11-26 20:16:20.551614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x895c50 (9): Bad file descriptor 00:38:19.737 [2024-11-26 20:16:20.552616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:19.737 [2024-11-26 20:16:20.552623] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:19.737 [2024-11-26 20:16:20.552629] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:19.737 [2024-11-26 20:16:20.552635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:38:19.998 request: 00:38:19.998 { 00:38:19.998 "name": "nvme0", 00:38:19.998 "trtype": "tcp", 00:38:19.998 "traddr": "127.0.0.1", 00:38:19.998 "adrfam": "ipv4", 00:38:19.998 "trsvcid": "4420", 00:38:19.998 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:19.998 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:19.998 "prchk_reftag": false, 00:38:19.998 "prchk_guard": false, 00:38:19.998 "hdgst": false, 00:38:19.998 "ddgst": false, 00:38:19.998 "psk": "key1", 00:38:19.998 "allow_unrecognized_csi": false, 00:38:19.998 "method": "bdev_nvme_attach_controller", 00:38:19.998 "req_id": 1 00:38:19.998 } 00:38:19.998 Got JSON-RPC error response 00:38:19.998 response: 00:38:19.998 { 00:38:19.998 "code": -5, 00:38:19.998 "message": "Input/output error" 00:38:19.998 } 00:38:19.998 20:16:20 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:19.998 20:16:20 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:19.998 20:16:20 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:19.998 20:16:20 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:19.998 20:16:20 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:38:19.998 20:16:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:19.998 20:16:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:19.998 20:16:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:19.998 20:16:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:19.998 20:16:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:19.998 20:16:20 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:19.998 20:16:20 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:38:19.998 20:16:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:19.998 20:16:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:19.998 20:16:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:19.998 20:16:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:19.998 20:16:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:20.259 20:16:20 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:38:20.259 20:16:20 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:38:20.259 20:16:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:20.519 20:16:21 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:38:20.519 20:16:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:20.519 20:16:21 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:38:20.519 20:16:21 keyring_file -- keyring/file.sh@78 -- # jq length 00:38:20.519 20:16:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:20.780 20:16:21 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:38:20.780 20:16:21 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.kkv9SFIkPF 00:38:20.780 20:16:21 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.kkv9SFIkPF 00:38:20.780 20:16:21 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:20.780 20:16:21 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.kkv9SFIkPF 00:38:20.780 20:16:21 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:20.780 20:16:21 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:20.780 20:16:21 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:20.780 20:16:21 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:20.780 20:16:21 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kkv9SFIkPF 00:38:20.780 20:16:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.kkv9SFIkPF 00:38:21.040 [2024-11-26 20:16:21.684555] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.kkv9SFIkPF': 0100660 00:38:21.040 [2024-11-26 20:16:21.684574] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:21.040 request: 00:38:21.040 { 00:38:21.040 "name": "key0", 00:38:21.040 "path": "/tmp/tmp.kkv9SFIkPF", 00:38:21.040 "method": "keyring_file_add_key", 00:38:21.040 "req_id": 1 00:38:21.040 } 00:38:21.040 Got JSON-RPC error response 00:38:21.040 response: 00:38:21.040 { 00:38:21.040 "code": -1, 00:38:21.040 "message": "Operation not permitted" 00:38:21.040 } 00:38:21.040 20:16:21 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:21.040 20:16:21 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:21.040 20:16:21 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:21.040 20:16:21 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:21.040 20:16:21 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.kkv9SFIkPF 00:38:21.041 20:16:21 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kkv9SFIkPF 00:38:21.041 20:16:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.kkv9SFIkPF 00:38:21.302 20:16:21 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.kkv9SFIkPF 00:38:21.302 20:16:21 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:38:21.302 20:16:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:21.302 20:16:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:21.302 20:16:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:21.302 20:16:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:21.302 20:16:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:21.302 20:16:22 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:38:21.302 20:16:22 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:21.302 20:16:22 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:21.302 20:16:22 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:21.302 20:16:22 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:21.302 20:16:22 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:21.302 20:16:22 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:21.302 20:16:22 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:21.302 20:16:22 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:21.302 20:16:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:21.564 [2024-11-26 20:16:22.253998] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.kkv9SFIkPF': No such file or directory 00:38:21.564 [2024-11-26 20:16:22.254013] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:21.564 [2024-11-26 20:16:22.254027] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:21.564 [2024-11-26 20:16:22.254032] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:38:21.564 [2024-11-26 20:16:22.254038] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:21.564 [2024-11-26 20:16:22.254043] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:21.564 request: 00:38:21.564 { 00:38:21.564 "name": "nvme0", 00:38:21.564 "trtype": "tcp", 00:38:21.564 "traddr": "127.0.0.1", 00:38:21.564 "adrfam": "ipv4", 00:38:21.564 "trsvcid": "4420", 00:38:21.564 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:21.564 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:21.564 "prchk_reftag": false, 00:38:21.564 "prchk_guard": false, 00:38:21.564 "hdgst": false, 00:38:21.564 "ddgst": false, 00:38:21.564 "psk": "key0", 00:38:21.564 "allow_unrecognized_csi": false, 00:38:21.564 "method": "bdev_nvme_attach_controller", 00:38:21.564 "req_id": 1 00:38:21.564 } 00:38:21.564 Got JSON-RPC error response 00:38:21.564 response: 00:38:21.564 { 00:38:21.564 "code": -19, 00:38:21.564 "message": "No such device" 00:38:21.564 } 00:38:21.564 20:16:22 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:21.564 20:16:22 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:21.564 20:16:22 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:21.564 20:16:22 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:21.564 20:16:22 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:38:21.564 20:16:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:21.825 20:16:22 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:21.825 20:16:22 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:38:21.825 20:16:22 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:21.825 20:16:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:21.826 20:16:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:21.826 20:16:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:21.826 20:16:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.X2wCCDHyFk 00:38:21.826 20:16:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:21.826 20:16:22 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:21.826 20:16:22 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:21.826 20:16:22 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:21.826 20:16:22 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:21.826 20:16:22 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:21.826 20:16:22 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:21.826 20:16:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.X2wCCDHyFk 00:38:21.826 20:16:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.X2wCCDHyFk 00:38:21.826 20:16:22 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.X2wCCDHyFk 00:38:21.826 20:16:22 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.X2wCCDHyFk 00:38:21.826 20:16:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.X2wCCDHyFk 00:38:22.086 20:16:22 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:22.086 20:16:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:22.086 nvme0n1 00:38:22.347 20:16:22 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:38:22.347 20:16:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:22.347 20:16:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:22.347 20:16:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:22.347 20:16:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:22.347 20:16:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:22.347 20:16:23 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:38:22.347 20:16:23 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:38:22.347 20:16:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:22.607 20:16:23 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:38:22.607 20:16:23 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:38:22.607 20:16:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:22.607 20:16:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:22.607 20:16:23 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:22.867 20:16:23 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:38:22.867 20:16:23 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:38:22.867 20:16:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:22.867 20:16:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:22.867 20:16:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:22.867 20:16:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:22.867 20:16:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:22.867 20:16:23 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:38:22.867 20:16:23 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:22.867 20:16:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:23.127 20:16:23 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:38:23.127 20:16:23 keyring_file -- keyring/file.sh@105 -- # jq length 00:38:23.127 20:16:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:23.388 20:16:23 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:38:23.388 20:16:23 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.X2wCCDHyFk 00:38:23.388 20:16:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.X2wCCDHyFk 00:38:23.388 20:16:24 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.jIRBt5JwB5 00:38:23.388 20:16:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.jIRBt5JwB5 00:38:23.647 20:16:24 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:23.647 20:16:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:23.908 nvme0n1 00:38:23.908 20:16:24 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:38:23.908 20:16:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:24.170 20:16:24 keyring_file -- keyring/file.sh@113 -- # config='{ 00:38:24.170 "subsystems": [ 00:38:24.170 { 00:38:24.170 "subsystem": "keyring", 00:38:24.170 "config": [ 00:38:24.170 { 00:38:24.170 "method": "keyring_file_add_key", 00:38:24.170 "params": { 00:38:24.170 "name": "key0", 00:38:24.170 "path": "/tmp/tmp.X2wCCDHyFk" 00:38:24.170 } 00:38:24.170 }, 00:38:24.170 { 00:38:24.170 "method": "keyring_file_add_key", 00:38:24.170 "params": { 00:38:24.170 "name": "key1", 00:38:24.170 "path": "/tmp/tmp.jIRBt5JwB5" 00:38:24.170 } 00:38:24.170 } 00:38:24.170 ] 00:38:24.170 
}, 00:38:24.170 { 00:38:24.170 "subsystem": "iobuf", 00:38:24.170 "config": [ 00:38:24.170 { 00:38:24.170 "method": "iobuf_set_options", 00:38:24.170 "params": { 00:38:24.170 "small_pool_count": 8192, 00:38:24.170 "large_pool_count": 1024, 00:38:24.170 "small_bufsize": 8192, 00:38:24.170 "large_bufsize": 135168, 00:38:24.170 "enable_numa": false 00:38:24.170 } 00:38:24.170 } 00:38:24.170 ] 00:38:24.170 }, 00:38:24.170 { 00:38:24.170 "subsystem": "sock", 00:38:24.170 "config": [ 00:38:24.170 { 00:38:24.170 "method": "sock_set_default_impl", 00:38:24.170 "params": { 00:38:24.170 "impl_name": "posix" 00:38:24.170 } 00:38:24.170 }, 00:38:24.170 { 00:38:24.170 "method": "sock_impl_set_options", 00:38:24.170 "params": { 00:38:24.170 "impl_name": "ssl", 00:38:24.170 "recv_buf_size": 4096, 00:38:24.170 "send_buf_size": 4096, 00:38:24.170 "enable_recv_pipe": true, 00:38:24.170 "enable_quickack": false, 00:38:24.170 "enable_placement_id": 0, 00:38:24.170 "enable_zerocopy_send_server": true, 00:38:24.170 "enable_zerocopy_send_client": false, 00:38:24.170 "zerocopy_threshold": 0, 00:38:24.170 "tls_version": 0, 00:38:24.170 "enable_ktls": false 00:38:24.170 } 00:38:24.170 }, 00:38:24.170 { 00:38:24.170 "method": "sock_impl_set_options", 00:38:24.170 "params": { 00:38:24.170 "impl_name": "posix", 00:38:24.170 "recv_buf_size": 2097152, 00:38:24.170 "send_buf_size": 2097152, 00:38:24.170 "enable_recv_pipe": true, 00:38:24.170 "enable_quickack": false, 00:38:24.170 "enable_placement_id": 0, 00:38:24.170 "enable_zerocopy_send_server": true, 00:38:24.170 "enable_zerocopy_send_client": false, 00:38:24.170 "zerocopy_threshold": 0, 00:38:24.170 "tls_version": 0, 00:38:24.170 "enable_ktls": false 00:38:24.170 } 00:38:24.170 } 00:38:24.170 ] 00:38:24.170 }, 00:38:24.170 { 00:38:24.170 "subsystem": "vmd", 00:38:24.170 "config": [] 00:38:24.170 }, 00:38:24.170 { 00:38:24.170 "subsystem": "accel", 00:38:24.170 "config": [ 00:38:24.170 { 00:38:24.170 "method": "accel_set_options", 00:38:24.170 "params": { 00:38:24.170 "small_cache_size": 128, 00:38:24.170 "large_cache_size": 16, 00:38:24.170 "task_count": 2048, 00:38:24.170 "sequence_count": 2048, 00:38:24.170 "buf_count": 2048 00:38:24.170 } 00:38:24.170 } 00:38:24.170 ] 00:38:24.170 }, 00:38:24.170 { 00:38:24.170 "subsystem": "bdev", 00:38:24.170 "config": [ 00:38:24.170 { 00:38:24.170 "method": "bdev_set_options", 00:38:24.170 "params": { 00:38:24.170 "bdev_io_pool_size": 65535, 00:38:24.170 "bdev_io_cache_size": 256, 00:38:24.170 "bdev_auto_examine": true, 00:38:24.170 "iobuf_small_cache_size": 128, 00:38:24.170 "iobuf_large_cache_size": 16 00:38:24.170 } 00:38:24.170 }, 00:38:24.170 { 00:38:24.170 "method": "bdev_raid_set_options", 00:38:24.170 "params": { 00:38:24.170 "process_window_size_kb": 1024, 00:38:24.170 "process_max_bandwidth_mb_sec": 0 00:38:24.170 } 00:38:24.170 }, 00:38:24.170 { 00:38:24.170 "method": "bdev_iscsi_set_options", 00:38:24.170 "params": { 00:38:24.170 "timeout_sec": 30 00:38:24.170 } 00:38:24.170 }, 00:38:24.170 { 00:38:24.170 "method": "bdev_nvme_set_options", 00:38:24.170 "params": { 00:38:24.170 "action_on_timeout": "none", 00:38:24.170 "timeout_us": 0, 00:38:24.170 "timeout_admin_us": 0, 00:38:24.170 "keep_alive_timeout_ms": 10000, 00:38:24.170 "arbitration_burst": 0, 00:38:24.170 "low_priority_weight": 0, 00:38:24.170 "medium_priority_weight": 0, 00:38:24.170 "high_priority_weight": 0, 00:38:24.170 "nvme_adminq_poll_period_us": 10000, 00:38:24.170 "nvme_ioq_poll_period_us": 0, 00:38:24.170 "io_queue_requests": 512, 00:38:24.170 
"delay_cmd_submit": true, 00:38:24.170 "transport_retry_count": 4, 00:38:24.170 "bdev_retry_count": 3, 00:38:24.170 "transport_ack_timeout": 0, 00:38:24.170 "ctrlr_loss_timeout_sec": 0, 00:38:24.170 "reconnect_delay_sec": 0, 00:38:24.170 "fast_io_fail_timeout_sec": 0, 00:38:24.170 "disable_auto_failback": false, 00:38:24.170 "generate_uuids": false, 00:38:24.170 "transport_tos": 0, 00:38:24.170 "nvme_error_stat": false, 00:38:24.170 "rdma_srq_size": 0, 00:38:24.170 "io_path_stat": false, 00:38:24.170 "allow_accel_sequence": false, 00:38:24.170 "rdma_max_cq_size": 0, 00:38:24.170 "rdma_cm_event_timeout_ms": 0, 00:38:24.170 "dhchap_digests": [ 00:38:24.170 "sha256", 00:38:24.170 "sha384", 00:38:24.170 "sha512" 00:38:24.170 ], 00:38:24.170 "dhchap_dhgroups": [ 00:38:24.170 "null", 00:38:24.170 "ffdhe2048", 00:38:24.170 "ffdhe3072", 00:38:24.170 "ffdhe4096", 00:38:24.170 "ffdhe6144", 00:38:24.170 "ffdhe8192" 00:38:24.170 ] 00:38:24.170 } 00:38:24.170 }, 00:38:24.170 { 00:38:24.170 "method": "bdev_nvme_attach_controller", 00:38:24.170 "params": { 00:38:24.170 "name": "nvme0", 00:38:24.170 "trtype": "TCP", 00:38:24.170 "adrfam": "IPv4", 00:38:24.170 "traddr": "127.0.0.1", 00:38:24.170 "trsvcid": "4420", 00:38:24.170 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:24.170 "prchk_reftag": false, 00:38:24.170 "prchk_guard": false, 00:38:24.170 "ctrlr_loss_timeout_sec": 0, 00:38:24.170 "reconnect_delay_sec": 0, 00:38:24.170 "fast_io_fail_timeout_sec": 0, 00:38:24.170 "psk": "key0", 00:38:24.170 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:24.170 "hdgst": false, 00:38:24.170 "ddgst": false, 00:38:24.170 "multipath": "multipath" 00:38:24.170 } 00:38:24.170 }, 00:38:24.170 { 00:38:24.170 "method": "bdev_nvme_set_hotplug", 00:38:24.170 "params": { 00:38:24.170 "period_us": 100000, 00:38:24.170 "enable": false 00:38:24.170 } 00:38:24.170 }, 00:38:24.170 { 00:38:24.170 "method": "bdev_wait_for_examine" 00:38:24.170 } 00:38:24.170 ] 00:38:24.170 }, 00:38:24.170 { 00:38:24.170 "subsystem": "nbd", 00:38:24.170 "config": [] 00:38:24.170 } 00:38:24.170 ] 00:38:24.170 }' 00:38:24.170 20:16:24 keyring_file -- keyring/file.sh@115 -- # killprocess 3991244 00:38:24.170 20:16:24 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3991244 ']' 00:38:24.170 20:16:24 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3991244 00:38:24.170 20:16:24 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:24.170 20:16:24 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:24.170 20:16:24 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3991244 00:38:24.171 20:16:24 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:24.171 20:16:24 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:24.171 20:16:24 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3991244' 00:38:24.171 killing process with pid 3991244 00:38:24.171 20:16:24 keyring_file -- common/autotest_common.sh@973 -- # kill 3991244 00:38:24.171 Received shutdown signal, test time was about 1.000000 seconds 00:38:24.171 00:38:24.171 Latency(us) 00:38:24.171 [2024-11-26T19:16:24.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:24.171 [2024-11-26T19:16:24.992Z] =================================================================================================================== 00:38:24.171 [2024-11-26T19:16:24.992Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:24.171 20:16:24 
keyring_file -- common/autotest_common.sh@978 -- # wait 3991244 00:38:24.171 20:16:24 keyring_file -- keyring/file.sh@118 -- # bperfpid=3993051 00:38:24.171 20:16:24 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3993051 /var/tmp/bperf.sock 00:38:24.171 20:16:24 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3993051 ']' 00:38:24.171 20:16:24 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:24.171 20:16:24 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:24.171 20:16:24 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:24.171 20:16:24 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:24.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:24.171 20:16:24 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:24.171 20:16:24 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:38:24.171 "subsystems": [ 00:38:24.171 { 00:38:24.171 "subsystem": "keyring", 00:38:24.171 "config": [ 00:38:24.171 { 00:38:24.171 "method": "keyring_file_add_key", 00:38:24.171 "params": { 00:38:24.171 "name": "key0", 00:38:24.171 "path": "/tmp/tmp.X2wCCDHyFk" 00:38:24.171 } 00:38:24.171 }, 00:38:24.171 { 00:38:24.171 "method": "keyring_file_add_key", 00:38:24.171 "params": { 00:38:24.171 "name": "key1", 00:38:24.171 "path": "/tmp/tmp.jIRBt5JwB5" 00:38:24.171 } 00:38:24.171 } 00:38:24.171 ] 00:38:24.171 }, 00:38:24.171 { 00:38:24.171 "subsystem": "iobuf", 00:38:24.171 "config": [ 00:38:24.171 { 00:38:24.171 "method": "iobuf_set_options", 00:38:24.171 "params": { 00:38:24.171 "small_pool_count": 8192, 00:38:24.171 "large_pool_count": 1024, 00:38:24.171 "small_bufsize": 8192, 00:38:24.171 "large_bufsize": 135168, 00:38:24.171 "enable_numa": false 00:38:24.171 } 00:38:24.171 } 00:38:24.171 ] 00:38:24.171 }, 00:38:24.171 { 00:38:24.171 "subsystem": "sock", 00:38:24.171 "config": [ 00:38:24.171 { 00:38:24.171 "method": "sock_set_default_impl", 00:38:24.171 "params": { 00:38:24.171 "impl_name": "posix" 00:38:24.171 } 00:38:24.171 }, 00:38:24.171 { 00:38:24.171 "method": "sock_impl_set_options", 00:38:24.171 "params": { 00:38:24.171 "impl_name": "ssl", 00:38:24.171 "recv_buf_size": 4096, 00:38:24.171 "send_buf_size": 4096, 00:38:24.171 "enable_recv_pipe": true, 00:38:24.171 "enable_quickack": false, 00:38:24.171 "enable_placement_id": 0, 00:38:24.171 "enable_zerocopy_send_server": true, 00:38:24.171 "enable_zerocopy_send_client": false, 00:38:24.171 "zerocopy_threshold": 0, 00:38:24.171 "tls_version": 0, 00:38:24.171 "enable_ktls": false 00:38:24.171 } 00:38:24.171 }, 00:38:24.171 { 00:38:24.171 "method": "sock_impl_set_options", 00:38:24.171 "params": { 00:38:24.171 "impl_name": "posix", 00:38:24.171 "recv_buf_size": 2097152, 00:38:24.171 "send_buf_size": 2097152, 00:38:24.171 "enable_recv_pipe": true, 00:38:24.171 "enable_quickack": false, 00:38:24.171 "enable_placement_id": 0, 00:38:24.171 "enable_zerocopy_send_server": true, 00:38:24.171 "enable_zerocopy_send_client": false, 00:38:24.171 "zerocopy_threshold": 0, 00:38:24.171 "tls_version": 0, 00:38:24.171 "enable_ktls": false 00:38:24.171 } 00:38:24.171 } 00:38:24.171 ] 00:38:24.171 }, 00:38:24.171 { 00:38:24.171 "subsystem": "vmd", 00:38:24.171 "config": [] 00:38:24.171 }, 
00:38:24.171 { 00:38:24.171 "subsystem": "accel", 00:38:24.171 "config": [ 00:38:24.171 { 00:38:24.171 "method": "accel_set_options", 00:38:24.171 "params": { 00:38:24.171 "small_cache_size": 128, 00:38:24.171 "large_cache_size": 16, 00:38:24.171 "task_count": 2048, 00:38:24.171 "sequence_count": 2048, 00:38:24.171 "buf_count": 2048 00:38:24.171 } 00:38:24.171 } 00:38:24.171 ] 00:38:24.171 }, 00:38:24.171 { 00:38:24.171 "subsystem": "bdev", 00:38:24.171 "config": [ 00:38:24.171 { 00:38:24.171 "method": "bdev_set_options", 00:38:24.171 "params": { 00:38:24.171 "bdev_io_pool_size": 65535, 00:38:24.171 "bdev_io_cache_size": 256, 00:38:24.171 "bdev_auto_examine": true, 00:38:24.171 "iobuf_small_cache_size": 128, 00:38:24.171 "iobuf_large_cache_size": 16 00:38:24.171 } 00:38:24.171 }, 00:38:24.171 { 00:38:24.171 "method": "bdev_raid_set_options", 00:38:24.171 "params": { 00:38:24.171 "process_window_size_kb": 1024, 00:38:24.171 "process_max_bandwidth_mb_sec": 0 00:38:24.171 } 00:38:24.171 }, 00:38:24.171 { 00:38:24.171 "method": "bdev_iscsi_set_options", 00:38:24.171 "params": { 00:38:24.171 "timeout_sec": 30 00:38:24.171 } 00:38:24.171 }, 00:38:24.171 { 00:38:24.171 "method": "bdev_nvme_set_options", 00:38:24.171 "params": { 00:38:24.171 "action_on_timeout": "none", 00:38:24.171 "timeout_us": 0, 00:38:24.171 "timeout_admin_us": 0, 00:38:24.171 "keep_alive_timeout_ms": 10000, 00:38:24.171 "arbitration_burst": 0, 00:38:24.171 "low_priority_weight": 0, 00:38:24.171 "medium_priority_weight": 0, 00:38:24.171 "high_priority_weight": 0, 00:38:24.171 "nvme_adminq_poll_period_us": 10000, 00:38:24.171 "nvme_ioq_poll_period_us": 0, 00:38:24.171 "io_queue_requests": 512, 00:38:24.171 "delay_cmd_submit": true, 00:38:24.171 "transport_retry_count": 4, 00:38:24.171 "bdev_retry_count": 3, 00:38:24.171 "transport_ack_timeout": 0, 00:38:24.171 "ctrlr_loss_timeout_sec": 0, 00:38:24.171 "reconnect_delay_sec": 0, 00:38:24.171 "fast_io_fail_timeout_sec": 0, 00:38:24.171 "disable_auto_failback": false, 00:38:24.171 "generate_uuids": false, 00:38:24.171 "transport_tos": 0, 00:38:24.171 "nvme_error_stat": false, 00:38:24.171 "rdma_srq_size": 0, 00:38:24.171 20:16:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:24.171 "io_path_stat": false, 00:38:24.171 "allow_accel_sequence": false, 00:38:24.171 "rdma_max_cq_size": 0, 00:38:24.171 "rdma_cm_event_timeout_ms": 0, 00:38:24.171 "dhchap_digests": [ 00:38:24.171 "sha256", 00:38:24.171 "sha384", 00:38:24.171 "sha512" 00:38:24.171 ], 00:38:24.171 "dhchap_dhgroups": [ 00:38:24.171 "null", 00:38:24.171 "ffdhe2048", 00:38:24.171 "ffdhe3072", 00:38:24.171 "ffdhe4096", 00:38:24.171 "ffdhe6144", 00:38:24.171 "ffdhe8192" 00:38:24.171 ] 00:38:24.171 } 00:38:24.171 }, 00:38:24.171 { 00:38:24.171 "method": "bdev_nvme_attach_controller", 00:38:24.171 "params": { 00:38:24.171 "name": "nvme0", 00:38:24.171 "trtype": "TCP", 00:38:24.171 "adrfam": "IPv4", 00:38:24.171 "traddr": "127.0.0.1", 00:38:24.171 "trsvcid": "4420", 00:38:24.171 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:24.171 "prchk_reftag": false, 00:38:24.171 "prchk_guard": false, 00:38:24.171 "ctrlr_loss_timeout_sec": 0, 00:38:24.171 "reconnect_delay_sec": 0, 00:38:24.171 "fast_io_fail_timeout_sec": 0, 00:38:24.171 "psk": "key0", 00:38:24.171 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:24.171 "hdgst": false, 00:38:24.171 "ddgst": false, 00:38:24.171 "multipath": "multipath" 00:38:24.171 } 00:38:24.171 }, 00:38:24.171 { 00:38:24.171 "method": "bdev_nvme_set_hotplug", 00:38:24.171 "params": { 00:38:24.171 
"period_us": 100000, 00:38:24.171 "enable": false 00:38:24.171 } 00:38:24.171 }, 00:38:24.172 { 00:38:24.172 "method": "bdev_wait_for_examine" 00:38:24.172 } 00:38:24.172 ] 00:38:24.172 }, 00:38:24.172 { 00:38:24.172 "subsystem": "nbd", 00:38:24.172 "config": [] 00:38:24.172 } 00:38:24.172 ] 00:38:24.172 }' 00:38:24.432 [2024-11-26 20:16:25.008818] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 00:38:24.432 [2024-11-26 20:16:25.008873] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3993051 ] 00:38:24.432 [2024-11-26 20:16:25.092609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:24.432 [2024-11-26 20:16:25.121878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:24.693 [2024-11-26 20:16:25.265818] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:25.265 20:16:25 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:25.265 20:16:25 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:25.265 20:16:25 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:25.265 20:16:25 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:25.265 20:16:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:25.265 20:16:26 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:25.265 20:16:26 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:25.265 20:16:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:25.265 20:16:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:25.265 20:16:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:25.265 20:16:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:25.265 20:16:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:25.525 20:16:26 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:25.525 20:16:26 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:25.525 20:16:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:25.525 20:16:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:25.525 20:16:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:25.525 20:16:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:25.525 20:16:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:25.786 20:16:26 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:25.786 20:16:26 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:38:25.786 20:16:26 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:25.786 20:16:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:25.786 20:16:26 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:25.786 20:16:26 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:25.786 20:16:26 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.X2wCCDHyFk /tmp/tmp.jIRBt5JwB5 00:38:25.786 20:16:26 keyring_file -- keyring/file.sh@20 -- # killprocess 3993051 00:38:25.786 20:16:26 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3993051 ']' 00:38:25.786 20:16:26 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3993051 00:38:25.786 20:16:26 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:25.786 20:16:26 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:25.786 20:16:26 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3993051 00:38:25.786 20:16:26 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:25.786 20:16:26 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:25.786 20:16:26 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3993051' 00:38:25.786 killing process with pid 3993051 00:38:25.786 20:16:26 keyring_file -- common/autotest_common.sh@973 -- # kill 3993051 00:38:25.786 Received shutdown signal, test time was about 1.000000 seconds 00:38:25.786 00:38:25.786 Latency(us) 00:38:25.786 [2024-11-26T19:16:26.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:25.786 [2024-11-26T19:16:26.607Z] =================================================================================================================== 00:38:25.786 [2024-11-26T19:16:26.607Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:25.786 20:16:26 keyring_file -- common/autotest_common.sh@978 -- # wait 3993051 00:38:26.048 20:16:26 keyring_file -- keyring/file.sh@21 -- # killprocess 3991207 00:38:26.048 20:16:26 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3991207 ']' 00:38:26.048 20:16:26 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3991207 00:38:26.048 20:16:26 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:26.048 20:16:26 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:26.048 20:16:26 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3991207 00:38:26.048 20:16:26 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:26.048 20:16:26 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:26.048 20:16:26 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3991207' 00:38:26.048 killing process with pid 3991207 00:38:26.048 20:16:26 keyring_file -- common/autotest_common.sh@973 -- # kill 3991207 00:38:26.048 20:16:26 keyring_file -- common/autotest_common.sh@978 -- # wait 3991207 00:38:26.308 00:38:26.308 real 0m12.112s 00:38:26.308 user 0m29.412s 00:38:26.308 sys 0m2.670s 00:38:26.308 20:16:26 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:26.308 20:16:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:26.308 ************************************ 00:38:26.308 END TEST keyring_file 00:38:26.308 ************************************ 00:38:26.308 20:16:26 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:38:26.308 20:16:26 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:26.308 20:16:26 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:26.308 20:16:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:26.308 20:16:26 
-- common/autotest_common.sh@10 -- # set +x 00:38:26.308 ************************************ 00:38:26.308 START TEST keyring_linux 00:38:26.309 ************************************ 00:38:26.309 20:16:27 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:26.309 Joined session keyring: 934335253 00:38:26.309 * Looking for test storage... 00:38:26.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:26.309 20:16:27 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:26.309 20:16:27 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:38:26.309 20:16:27 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:26.570 20:16:27 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:26.570 20:16:27 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:26.570 20:16:27 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:26.570 20:16:27 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:26.570 20:16:27 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:38:26.570 20:16:27 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:38:26.570 20:16:27 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:38:26.570 20:16:27 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:38:26.570 20:16:27 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:38:26.570 20:16:27 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:38:26.570 20:16:27 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:38:26.570 20:16:27 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:26.570 20:16:27 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:38:26.570 20:16:27 keyring_linux -- scripts/common.sh@345 -- # : 1 00:38:26.570 20:16:27 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:26.570 20:16:27 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:26.570 20:16:27 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:38:26.570 20:16:27 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:38:26.571 20:16:27 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:26.571 20:16:27 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:38:26.571 20:16:27 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:38:26.571 20:16:27 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:38:26.571 20:16:27 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:38:26.571 20:16:27 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:26.571 20:16:27 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:38:26.571 20:16:27 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:38:26.571 20:16:27 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:26.571 20:16:27 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:26.571 20:16:27 keyring_linux -- scripts/common.sh@368 -- # return 0 00:38:26.571 20:16:27 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:26.571 20:16:27 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:26.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:26.571 --rc genhtml_branch_coverage=1 00:38:26.571 --rc genhtml_function_coverage=1 00:38:26.571 --rc genhtml_legend=1 00:38:26.571 --rc geninfo_all_blocks=1 00:38:26.571 --rc geninfo_unexecuted_blocks=1 00:38:26.571 00:38:26.571 ' 00:38:26.571 20:16:27 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:26.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:26.571 --rc genhtml_branch_coverage=1 00:38:26.571 --rc genhtml_function_coverage=1 00:38:26.571 --rc genhtml_legend=1 00:38:26.571 --rc geninfo_all_blocks=1 00:38:26.571 --rc geninfo_unexecuted_blocks=1 00:38:26.571 00:38:26.571 ' 00:38:26.571 20:16:27 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:26.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:26.571 --rc genhtml_branch_coverage=1 00:38:26.571 --rc genhtml_function_coverage=1 00:38:26.571 --rc genhtml_legend=1 00:38:26.571 --rc geninfo_all_blocks=1 00:38:26.571 --rc geninfo_unexecuted_blocks=1 00:38:26.571 00:38:26.571 ' 00:38:26.571 20:16:27 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:26.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:26.571 --rc genhtml_branch_coverage=1 00:38:26.571 --rc genhtml_function_coverage=1 00:38:26.571 --rc genhtml_legend=1 00:38:26.571 --rc geninfo_all_blocks=1 00:38:26.571 --rc geninfo_unexecuted_blocks=1 00:38:26.571 00:38:26.571 ' 00:38:26.571 20:16:27 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:26.571 20:16:27 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:26.571 20:16:27 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:26.571 20:16:27 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:26.571 20:16:27 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:26.571 20:16:27 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:26.571 20:16:27 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:26.571 20:16:27 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:26.571 20:16:27 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:26.571 20:16:27 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:26.571 20:16:27 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:26.571 20:16:27 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:26.571 20:16:27 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:26.571 20:16:27 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:26.571 20:16:27 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:26.571 20:16:27 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:26.571 20:16:27 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:26.571 20:16:27 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:26.571 20:16:27 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:26.571 20:16:27 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:26.571 20:16:27 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:38:26.571 20:16:27 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:26.571 20:16:27 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:26.571 20:16:27 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:26.571 20:16:27 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:26.571 20:16:27 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:26.571 20:16:27 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:26.571 20:16:27 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:26.571 20:16:27 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:38:26.571 20:16:27 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:38:26.571 20:16:27 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:26.571 20:16:27 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:26.571 20:16:27 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:26.571 20:16:27 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:26.571 20:16:27 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:26.571 20:16:27 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:26.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:26.571 20:16:27 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:26.571 20:16:27 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:26.571 20:16:27 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:26.571 20:16:27 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:26.571 20:16:27 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:26.571 20:16:27 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:26.571 20:16:27 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:26.571 20:16:27 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:26.571 20:16:27 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:26.571 20:16:27 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:26.572 20:16:27 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:26.572 20:16:27 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:38:26.572 20:16:27 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:26.572 20:16:27 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:26.572 20:16:27 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:26.572 20:16:27 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:26.572 20:16:27 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:26.572 20:16:27 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:26.572 20:16:27 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:26.572 20:16:27 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:26.572 20:16:27 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:26.572 20:16:27 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:26.572 20:16:27 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:26.572 20:16:27 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:26.572 /tmp/:spdk-test:key0 00:38:26.572 20:16:27 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:26.572 20:16:27 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:26.572 20:16:27 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:26.572 20:16:27 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:26.572 20:16:27 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:26.572 20:16:27 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:26.572 
20:16:27 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:26.572 20:16:27 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:26.572 20:16:27 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:26.572 20:16:27 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:26.572 20:16:27 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:26.572 20:16:27 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:26.572 20:16:27 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:26.572 20:16:27 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:26.572 20:16:27 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:26.572 /tmp/:spdk-test:key1 00:38:26.572 20:16:27 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3993575 00:38:26.572 20:16:27 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3993575 00:38:26.572 20:16:27 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:26.572 20:16:27 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3993575 ']' 00:38:26.572 20:16:27 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:26.572 20:16:27 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:26.572 20:16:27 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:26.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:26.572 20:16:27 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:26.572 20:16:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:26.912 [2024-11-26 20:16:27.416736] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
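The two NVMeTLSkey-1:00:...: strings written to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 above come from format_interchange_psk, whose inlined `python -` heredoc is not shown in the trace. A rough stand-alone reconstruction of what it computes (a sketch only: it assumes the trailer is a little-endian zlib CRC-32 of the key bytes, which is consistent with the 48-character base64 payload printed above):

    key=00112233445566778899aabbccddeeff   # key0 from this run
    # Assumed trailer: little-endian zlib CRC-32 of the ASCII key string.
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); c=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:00:%s:" % base64.b64encode(k+c).decode())' "$key"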
00:38:26.912 [2024-11-26 20:16:27.416816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3993575 ] 00:38:26.912 [2024-11-26 20:16:27.503293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:26.912 [2024-11-26 20:16:27.538690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:27.535 20:16:28 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:27.535 20:16:28 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:27.535 20:16:28 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:27.535 20:16:28 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.535 20:16:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:27.535 [2024-11-26 20:16:28.210103] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:27.535 null0 00:38:27.535 [2024-11-26 20:16:28.242166] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:27.535 [2024-11-26 20:16:28.242515] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:27.535 20:16:28 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.535 20:16:28 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:27.535 122056832 00:38:27.535 20:16:28 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:27.535 1000735281 00:38:27.535 20:16:28 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3993830 00:38:27.535 20:16:28 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3993830 /var/tmp/bperf.sock 00:38:27.535 20:16:28 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:27.535 20:16:28 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3993830 ']' 00:38:27.535 20:16:28 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:27.535 20:16:28 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:27.535 20:16:28 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:27.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:27.535 20:16:28 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:27.535 20:16:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:27.535 [2024-11-26 20:16:28.329055] Starting SPDK v25.01-pre git sha1 0617ba6b2 / DPDK 24.03.0 initialization... 
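The bare numbers 122056832 and 1000735281 echoed above are the kernel key serials that keyctl add returned after pinning the two interchange PSKs into the session keyring (@s); every later step in the test, lookup, payload verification and cleanup, goes through those serials. The keyutils round trip, with the key0 values from this run:

    keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
                                            # prints the new serial, 122056832 here
    keyctl search @s user :spdk-test:key0   # resolves the name back to 122056832
    keyctl print 122056832                  # dumps the payload for comparison
    keyctl unlink 122056832                 # detaches it from the session keyring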
00:38:27.535 [2024-11-26 20:16:28.329125] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3993830 ] 00:38:27.796 [2024-11-26 20:16:28.392616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:27.796 [2024-11-26 20:16:28.422131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:28.364 20:16:29 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:28.364 20:16:29 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:28.364 20:16:29 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:28.364 20:16:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:28.623 20:16:29 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:28.623 20:16:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:28.920 20:16:29 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:28.920 20:16:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:28.920 [2024-11-26 20:16:29.651382] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:28.920 nvme0n1 00:38:29.179 20:16:29 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:38:29.179 20:16:29 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:29.179 20:16:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:29.179 20:16:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:29.179 20:16:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:29.179 20:16:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:29.179 20:16:29 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:29.179 20:16:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:29.179 20:16:29 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:29.179 20:16:29 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:29.179 20:16:29 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:29.179 20:16:29 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:29.179 20:16:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:29.439 20:16:30 keyring_linux -- keyring/linux.sh@25 -- # sn=122056832 00:38:29.439 20:16:30 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:29.439 20:16:30 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:29.439 20:16:30 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 122056832 == \1\2\2\0\5\6\8\3\2 ]] 00:38:29.439 20:16:30 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 122056832 00:38:29.439 20:16:30 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:29.439 20:16:30 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:29.439 Running I/O for 1 seconds... 00:38:30.382 24008.00 IOPS, 93.78 MiB/s 00:38:30.382 Latency(us) 00:38:30.382 [2024-11-26T19:16:31.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:30.382 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:30.382 nvme0n1 : 1.01 24009.16 93.79 0.00 0.00 5315.73 4450.99 14854.83 00:38:30.382 [2024-11-26T19:16:31.203Z] =================================================================================================================== 00:38:30.382 [2024-11-26T19:16:31.203Z] Total : 24009.16 93.79 0.00 0.00 5315.73 4450.99 14854.83 00:38:30.382 { 00:38:30.382 "results": [ 00:38:30.382 { 00:38:30.382 "job": "nvme0n1", 00:38:30.382 "core_mask": "0x2", 00:38:30.382 "workload": "randread", 00:38:30.382 "status": "finished", 00:38:30.382 "queue_depth": 128, 00:38:30.382 "io_size": 4096, 00:38:30.382 "runtime": 1.005283, 00:38:30.382 "iops": 24009.159609781524, 00:38:30.382 "mibps": 93.78577972570908, 00:38:30.382 "io_failed": 0, 00:38:30.382 "io_timeout": 0, 00:38:30.382 "avg_latency_us": 5315.733227267705, 00:38:30.382 "min_latency_us": 4450.986666666667, 00:38:30.382 "max_latency_us": 14854.826666666666 00:38:30.382 } 00:38:30.382 ], 00:38:30.382 "core_count": 1 00:38:30.382 } 00:38:30.642 20:16:31 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:30.642 20:16:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:30.643 20:16:31 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:30.643 20:16:31 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:30.643 20:16:31 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:30.643 20:16:31 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:30.643 20:16:31 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:30.643 20:16:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:30.903 20:16:31 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:30.903 20:16:31 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:30.903 20:16:31 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:30.903 20:16:31 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:30.903 20:16:31 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:38:30.903 20:16:31 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:38:30.903 20:16:31 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:30.903 20:16:31 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:30.903 20:16:31 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:30.903 20:16:31 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:30.903 20:16:31 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:30.903 20:16:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:31.163 [2024-11-26 20:16:31.771460] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:31.163 [2024-11-26 20:16:31.771561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d009e0 (107): Transport endpoint is not connected 00:38:31.163 [2024-11-26 20:16:31.772557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d009e0 (9): Bad file descriptor 00:38:31.163 [2024-11-26 20:16:31.773559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:31.163 [2024-11-26 20:16:31.773566] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:31.163 [2024-11-26 20:16:31.773572] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:31.163 [2024-11-26 20:16:31.773578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
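The ERROR lines above are the expected outcome, not a regression: the listener was brought up with key0, so an attach that presents :spdk-test:key1 has to fail the TLS handshake, and the test wraps the RPC in NOT to assert exactly that. A condensed sketch of the NOT idiom visible in the trace (the real helper in autotest_common.sh additionally validates the command and special-cases exit statuses above 128, per the es bookkeeping that follows below):

    NOT() {
        # Invert the wrapped command: succeed only when it fails.
        local es=0
        "$@" || es=$?
        ((es != 0))
    }
    NOT false && echo "negative test passed"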
00:38:31.163 request: 00:38:31.163 { 00:38:31.163 "name": "nvme0", 00:38:31.163 "trtype": "tcp", 00:38:31.163 "traddr": "127.0.0.1", 00:38:31.163 "adrfam": "ipv4", 00:38:31.163 "trsvcid": "4420", 00:38:31.163 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:31.163 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:31.163 "prchk_reftag": false, 00:38:31.163 "prchk_guard": false, 00:38:31.163 "hdgst": false, 00:38:31.163 "ddgst": false, 00:38:31.164 "psk": ":spdk-test:key1", 00:38:31.164 "allow_unrecognized_csi": false, 00:38:31.164 "method": "bdev_nvme_attach_controller", 00:38:31.164 "req_id": 1 00:38:31.164 } 00:38:31.164 Got JSON-RPC error response 00:38:31.164 response: 00:38:31.164 { 00:38:31.164 "code": -5, 00:38:31.164 "message": "Input/output error" 00:38:31.164 } 00:38:31.164 20:16:31 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:38:31.164 20:16:31 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:31.164 20:16:31 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:31.164 20:16:31 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:31.164 20:16:31 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:38:31.164 20:16:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:31.164 20:16:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:38:31.164 20:16:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:38:31.164 20:16:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:38:31.164 20:16:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:31.164 20:16:31 keyring_linux -- keyring/linux.sh@33 -- # sn=122056832 00:38:31.164 20:16:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 122056832 00:38:31.164 1 links removed 00:38:31.164 20:16:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:31.164 20:16:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:38:31.164 20:16:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:38:31.164 20:16:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:38:31.164 20:16:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:38:31.164 20:16:31 keyring_linux -- keyring/linux.sh@33 -- # sn=1000735281 00:38:31.164 20:16:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1000735281 00:38:31.164 1 links removed 00:38:31.164 20:16:31 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3993830 00:38:31.164 20:16:31 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3993830 ']' 00:38:31.164 20:16:31 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3993830 00:38:31.164 20:16:31 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:31.164 20:16:31 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:31.164 20:16:31 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3993830 00:38:31.164 20:16:31 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:31.164 20:16:31 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:31.164 20:16:31 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3993830' 00:38:31.164 killing process with pid 3993830 00:38:31.164 20:16:31 keyring_linux -- common/autotest_common.sh@973 -- # kill 3993830 00:38:31.164 Received shutdown signal, test time was about 1.000000 seconds 00:38:31.164 00:38:31.164 
Latency(us) 00:38:31.164 [2024-11-26T19:16:31.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:31.164 [2024-11-26T19:16:31.985Z] =================================================================================================================== 00:38:31.164 [2024-11-26T19:16:31.985Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:31.164 20:16:31 keyring_linux -- common/autotest_common.sh@978 -- # wait 3993830 00:38:31.164 20:16:31 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3993575 00:38:31.164 20:16:31 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3993575 ']' 00:38:31.164 20:16:31 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3993575 00:38:31.164 20:16:31 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:31.164 20:16:31 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:31.164 20:16:31 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3993575 00:38:31.424 20:16:32 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:31.424 20:16:32 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:31.424 20:16:32 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3993575' 00:38:31.424 killing process with pid 3993575 00:38:31.424 20:16:32 keyring_linux -- common/autotest_common.sh@973 -- # kill 3993575 00:38:31.424 20:16:32 keyring_linux -- common/autotest_common.sh@978 -- # wait 3993575 00:38:31.424 00:38:31.424 real 0m5.219s 00:38:31.424 user 0m9.772s 00:38:31.424 sys 0m1.379s 00:38:31.424 20:16:32 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:31.424 20:16:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:31.424 ************************************ 00:38:31.424 END TEST keyring_linux 00:38:31.424 ************************************ 00:38:31.685 20:16:32 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:31.685 20:16:32 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:31.685 20:16:32 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:38:31.685 20:16:32 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:38:31.685 20:16:32 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:38:31.685 20:16:32 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:38:31.685 20:16:32 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:38:31.685 20:16:32 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:38:31.685 20:16:32 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:38:31.685 20:16:32 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:31.685 20:16:32 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:38:31.685 20:16:32 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:31.685 20:16:32 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:31.685 20:16:32 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:38:31.685 20:16:32 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:38:31.685 20:16:32 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:38:31.685 20:16:32 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:38:31.685 20:16:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:31.685 20:16:32 -- common/autotest_common.sh@10 -- # set +x 00:38:31.685 20:16:32 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:38:31.685 20:16:32 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:38:31.685 20:16:32 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:38:31.685 20:16:32 -- common/autotest_common.sh@10 -- # set +x 00:38:39.827 INFO: APP EXITING 
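Both teardown paths above went through killprocess, which refuses to signal anything it cannot positively identify: it probes the pid, checks the command name (reactor_1 and reactor_0 here, and never sudo), and only then kills and reaps. A condensed sketch of that pattern as it appears in the trace:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0                          # already gone
        if [[ $(uname) == Linux ]]; then
            # Never signal a recycled pid that now belongs to sudo.
            [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # reaps the child; works because it was launched by this shell
    }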
00:38:39.827 INFO: killing all VMs 00:38:39.827 INFO: killing vhost app 00:38:39.827 WARN: no vhost pid file found 00:38:39.827 INFO: EXIT DONE 00:38:43.152 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:38:43.152 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:38:43.152 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:38:43.152 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:38:43.152 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:38:43.152 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:38:43.152 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:38:43.152 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:38:43.152 0000:65:00.0 (144d a80a): Already using the nvme driver 00:38:43.152 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:38:43.152 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:38:43.152 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:38:43.152 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:38:43.152 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:38:43.152 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:38:43.152 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:38:43.152 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:38:47.363 Cleaning 00:38:47.363 Removing: /var/run/dpdk/spdk0/config 00:38:47.363 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:47.363 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:47.363 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:47.363 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:47.363 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:47.363 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:47.363 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:47.363 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:47.363 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:47.363 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:47.363 Removing: /var/run/dpdk/spdk1/config 00:38:47.363 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:47.363 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:47.363 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:47.363 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:47.363 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:38:47.363 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:38:47.363 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:38:47.363 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:38:47.363 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:38:47.363 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:47.363 Removing: /var/run/dpdk/spdk2/config 00:38:47.363 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:47.363 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:47.363 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:47.363 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:47.363 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:38:47.363 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:38:47.363 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:38:47.363 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:38:47.363 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:47.363 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:47.363 Removing: 
/var/run/dpdk/spdk3/config 00:38:47.363 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:47.363 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:47.363 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:38:47.363 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:47.363 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:38:47.363 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:38:47.363 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:38:47.363 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:38:47.363 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:47.363 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:47.363 Removing: /var/run/dpdk/spdk4/config 00:38:47.363 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:47.363 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:47.363 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:47.363 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:47.363 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:38:47.363 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:38:47.363 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:38:47.363 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:38:47.363 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:38:47.363 Removing: /var/run/dpdk/spdk4/hugepage_info 00:38:47.363 Removing: /dev/shm/bdev_svc_trace.1 00:38:47.363 Removing: /dev/shm/nvmf_trace.0 00:38:47.363 Removing: /dev/shm/spdk_tgt_trace.pid3382923 00:38:47.363 Removing: /var/run/dpdk/spdk0 00:38:47.363 Removing: /var/run/dpdk/spdk1 00:38:47.363 Removing: /var/run/dpdk/spdk2 00:38:47.363 Removing: /var/run/dpdk/spdk3 00:38:47.363 Removing: /var/run/dpdk/spdk4 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3381298 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3382923 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3383846 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3385177 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3385576 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3387239 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3387578 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3387944 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3389215 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3390080 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3390465 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3390901 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3391286 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3391648 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3392004 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3392426 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3392857 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3394199 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3398158 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3398203 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3398602 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3398927 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3399346 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3399461 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3399993 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3400110 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3400470 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3400573 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3400871 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3401043 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3401675 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3401903 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3402222 00:38:47.363 Removing: 
/var/run/dpdk/spdk_pid3407619 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3413396 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3430846 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3432180 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3439096 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3439798 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3446920 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3457050 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3462267 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3478916 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3493135 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3497039 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3498845 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3523790 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3528546 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3586424 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3592816 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3599998 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3607883 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3607885 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3608889 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3609899 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3610910 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3611583 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3611589 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3611918 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3611936 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3611964 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3613023 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3614078 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3615147 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3615766 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3615888 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3616134 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3617501 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3618796 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3629365 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3663630 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3669283 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3671264 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3673603 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3673850 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3674038 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3674316 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3675024 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3677350 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3678463 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3679172 00:38:47.363 Removing: /var/run/dpdk/spdk_pid3681876 00:38:47.364 Removing: /var/run/dpdk/spdk_pid3682586 00:38:47.364 Removing: /var/run/dpdk/spdk_pid3683303 00:38:47.364 Removing: /var/run/dpdk/spdk_pid3688357 00:38:47.364 Removing: /var/run/dpdk/spdk_pid3695067 00:38:47.364 Removing: /var/run/dpdk/spdk_pid3695068 00:38:47.364 Removing: /var/run/dpdk/spdk_pid3695069 00:38:47.364 Removing: /var/run/dpdk/spdk_pid3699756 00:38:47.364 Removing: /var/run/dpdk/spdk_pid3710021 00:38:47.364 Removing: /var/run/dpdk/spdk_pid3715410 00:38:47.364 Removing: /var/run/dpdk/spdk_pid3722655 00:38:47.364 Removing: /var/run/dpdk/spdk_pid3724124 00:38:47.364 Removing: /var/run/dpdk/spdk_pid3725863 00:38:47.364 Removing: /var/run/dpdk/spdk_pid3727504 00:38:47.364 Removing: /var/run/dpdk/spdk_pid3733197 00:38:47.364 Removing: /var/run/dpdk/spdk_pid3738381 00:38:47.364 Removing: /var/run/dpdk/spdk_pid3743367 00:38:47.364 Removing: /var/run/dpdk/spdk_pid3752468 00:38:47.364 Removing: /var/run/dpdk/spdk_pid3752522 00:38:47.364 Removing: 
/var/run/dpdk/spdk_pid3757816 00:38:47.364 Removing: /var/run/dpdk/spdk_pid3757974 00:38:47.364 Removing: /var/run/dpdk/spdk_pid3758204 00:38:47.364 Removing: /var/run/dpdk/spdk_pid3758836 00:38:47.364 Removing: /var/run/dpdk/spdk_pid3758864 00:38:47.364 Removing: /var/run/dpdk/spdk_pid3764247 00:38:47.364 Removing: /var/run/dpdk/spdk_pid3765072 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3770932 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3774171 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3780876 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3787421 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3797672 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3806049 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3806091 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3829768 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3830452 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3831148 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3831993 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3832986 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3833800 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3834571 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3835258 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3840367 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3840673 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3848010 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3848160 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3854726 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3859880 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3871870 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3872603 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3877764 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3878143 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3883183 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3889910 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3892987 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3905143 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3915745 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3917683 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3918827 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3939016 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3943735 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3946917 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3954689 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3954695 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3960664 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3963087 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3965306 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3966808 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3969086 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3970633 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3981048 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3981717 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3982384 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3985331 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3985851 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3986350 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3991207 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3991244 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3993051 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3993575 00:38:47.625 Removing: /var/run/dpdk/spdk_pid3993830 00:38:47.625 Clean 00:38:47.886 20:16:48 -- common/autotest_common.sh@1453 -- # return 0 00:38:47.886 20:16:48 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:38:47.886 20:16:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:47.886 20:16:48 -- common/autotest_common.sh@10 -- # set +x 00:38:47.886 20:16:48 -- 
spdk/autotest.sh@391 -- # timing_exit autotest 00:38:47.886 20:16:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:47.886 20:16:48 -- common/autotest_common.sh@10 -- # set +x 00:38:47.886 20:16:48 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:47.886 20:16:48 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:38:47.886 20:16:48 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:38:47.886 20:16:48 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:38:47.886 20:16:48 -- spdk/autotest.sh@398 -- # hostname 00:38:47.886 20:16:48 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:38:48.146 geninfo: WARNING: invalid characters removed from testname! 00:39:14.730 20:17:14 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:16.641 20:17:17 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:18.022 20:17:18 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:19.934 20:17:20 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:21.844 20:17:22 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:23.227 20:17:23 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:25.138 20:17:25 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:25.138 20:17:25 -- spdk/autorun.sh@1 -- $ timing_finish 00:39:25.138 20:17:25 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:39:25.138 20:17:25 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:39:25.138 20:17:25 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:39:25.138 20:17:25 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:25.138 + [[ -n 3295863 ]] 00:39:25.138 + sudo kill 3295863 00:39:25.149 [Pipeline] } 00:39:25.164 [Pipeline] // stage 00:39:25.172 [Pipeline] } 00:39:25.188 [Pipeline] // timeout 00:39:25.194 [Pipeline] } 00:39:25.210 [Pipeline] // catchError 00:39:25.216 [Pipeline] } 00:39:25.233 [Pipeline] // wrap 00:39:25.241 [Pipeline] } 00:39:25.254 [Pipeline] // catchError 00:39:25.263 [Pipeline] stage 00:39:25.264 [Pipeline] { (Epilogue) 00:39:25.279 [Pipeline] catchError 00:39:25.281 [Pipeline] { 00:39:25.297 [Pipeline] echo 00:39:25.299 Cleanup processes 00:39:25.306 [Pipeline] sh 00:39:25.593 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:25.593 4006855 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:25.607 [Pipeline] sh 00:39:25.894 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:25.894 ++ grep -v 'sudo pgrep' 00:39:25.894 ++ awk '{print $1}' 00:39:25.894 + sudo kill -9 00:39:25.894 + true 00:39:25.906 [Pipeline] sh 00:39:26.193 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:39:38.464 [Pipeline] sh 00:39:38.753 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:39:38.753 Artifacts sizes are good 00:39:38.769 [Pipeline] archiveArtifacts 00:39:38.777 Archiving artifacts 00:39:38.948 [Pipeline] sh 00:39:39.363 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:39:39.377 [Pipeline] cleanWs 00:39:39.386 [WS-CLEANUP] Deleting project workspace... 00:39:39.386 [WS-CLEANUP] Deferred wipeout is used... 00:39:39.393 [WS-CLEANUP] done 00:39:39.395 [Pipeline] } 00:39:39.407 [Pipeline] // catchError 00:39:39.420 [Pipeline] sh 00:39:39.708 + logger -p user.info -t JENKINS-CI 00:39:39.718 [Pipeline] } 00:39:39.731 [Pipeline] // stage 00:39:39.737 [Pipeline] } 00:39:39.751 [Pipeline] // node 00:39:39.756 [Pipeline] End of Pipeline 00:39:39.797 Finished: SUCCESS